Engineering of Failure

This article is for all the engineers across the world who design and develop products for the CONSUMERS. True engineers have this urge, a strong desire to keep developing new stuff, which not only makes this world more beautiful but also helps them pay their bills. Little are we aware that this “engineering” is designed to FAIL. The products we engineers develop and the technology we use have a “Life Span” or an “Expiry Date”, dictated by the consumer society and the corporates.

I am a movieaholic! Of all the things in the world, if there is one thing I can do at least once a day, it is watching movies. Speaking of movies, if you’ve never watched FIGHT CLUB, please do. It teaches us the basic mistakes we make in our everyday lives.


Now, let’s discuss a simple everyday problem which we all face: the strong desire to buy a new smartphone.

You’ve got the latest and costliest phone on the market, with all the shiny features, but within a year you will plan to buy another one. Trust me, there is nothing wrong with your mind; this is how our brains are dictated (by society) to work. This “strong desire” is cultivated, or rather implanted, in our brains by the business minds of our world. It is like the idea of fashion and music. The fashion of sporting a particular style of clothes along with listening to a particular style of music has been the default identity of every period of time or era. For example, bell-bottom pants and rock music were THE thing in the 70s and 80s. Now is the time of ever-changing and fast-evolving technologies, smartphones, gadgets, social media and Electronic Dance Music. While change is good, we need to understand why it is good and how it is practically achieved. Well, we achieve this by a combination of the FAILURE of something and the marketing of something else as BETTER.


All the above things may sound obvious but they are NOT. There is a very interesting theory behind this called The Light Bulb Conspiracy, and there is a famous documentary of the same name about it.

According to this theory, products are designed to fail after a certain period of time. The same is true with technology. No matter how talented the engineers of any era are, their talent will always be dictated by the consumer-based society and by businessmen who think of nothing but profit. So whenever something great is created, it will be degraded somewhat before being presented to the world, so that the product dies slowly, making way for something new which leads to a monetary profit rather than a technical one.

A classic example is Apple Inc. They tell the world that there is sheer brilliance in their product, but after a while they will mark the same product as ‘inferior’ and stop supporting it, forcing their own customers (rather, consumers) to buy their new, costlier products. The aura and hype of Apple Inc. is another strategic plan to keep the company’s sales up.

Similar trends can be seen in technology. Take, for example, web development. Every year something new comes into the picture for developing “cutting edge” websites. But popular frameworks or technologies of today may vanish tomorrow; engineers can never know. This is the direct result of the “Light Bulb Conspiracy”. Because of these “profit making” strategies, software engineers are in constant fear of losing their jobs, leading them to keep learning new technologies every now and then. The consequences paint a picture with its own pros and cons. On one hand there is always restlessness in the IT world, but on the other hand it keeps an average IT professional’s brain alive and active.

One argument may be that new technologies provide better and more efficient software development techniques. This is partially true. While the new programming languages, techniques, tools and frameworks do provide easier approaches to software development, they may also kill the jobs of the skilled IT professionals who have dealt with the ‘hard way’ of software development.


The IT industry and technical companies are becoming increasingly clever at selling “old stuff in new packaging”. The biggest example is Cloud Computing. As an official article by IBM says, and I quote:

It (cloud computing) was a gradual evolution that started in the 1950s with mainframe computing.

Multiple users were capable of accessing a central computer through dumb terminals, whose only function was to provide access to the mainframe. Because of the costs to buy and maintain mainframe computers, it was not practical for an organization to buy and maintain one for every employee. Nor did the typical user need the large (at the time) storage capacity and processing power that a mainframe provided. Providing shared access to a single resource was the solution that made economical sense for this sophisticated piece of technology.

This is one big HOPE that IT has given us: even if, according to the Light Bulb Conspiracy, a particular technology is meant to die after some time, we can always redesign the business model and present it in some other context for the world to consume again. The people of the world will happily consume it if an urgent necessity and the “fashion to consume it” are shown to them in a proper manner. Then we too can sit back with popcorn and enjoy the show!


New improved web scraper | Python | Selenium | BeautifulSoup | PhantomJS

Improved version of A Simple Web Crawler or Web Scraper
In this version the “Browser” part is minimized and I have used PhantomJS as a headless web browser. You can also find it on gist.

# -*- coding: utf-8 -*-
'''
Created on May 27, 2016

@author: abgupta
'''
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
import time, sys, traceback
from HTMLParser import HTMLParser
from bs4 import BeautifulSoup

class Scraper(object):
    '''
    classdocs
    '''
    def __init__(self):
        '''
        Constructor
        '''
        self.url = 'https://sjobs.brassring.com/TGWebHost/home.aspx?partnerid=25667&siteid=5417'
        self.base_job_url = 'https://sjobs.brassring.com/TGWebHost/jobdetails.aspx?'
        self.browser = webdriver.PhantomJS(executable_path='phantomjs.exe',
                                           desired_capabilities=webdriver.DesiredCapabilities.HTMLUNITWITHJS,
                                           service_args=['--load-images=no']
                                           )
        self.first_page_search_opening_id = 'srchOpenLink'
        self.second_page_search_btn_id = 'ctl00_MainContent_submit2'
        self.next_link_id = 'yui-pg0-0-next-link'

    #Spinner
    def DrawSpinner(self, counter):
        if counter % 4 == 0:
            sys.stdout.write("/")
        elif counter % 4 == 1:
            sys.stdout.write("-")
        elif counter % 4 == 2:
            sys.stdout.write("\\")
        elif counter % 4 == 3:
            sys.stdout.write("|")
        sys.stdout.flush()
        sys.stdout.write('\b')

    def first_page(self, url):
        try:
            self.browser.get(url)
            #link = self.browser.find_element_by_link_text('Search openings')
            link = self.browser.find_element_by_id(self.first_page_search_opening_id)
            link.click()
            # wait for the page to load
            WebDriverWait(self.browser, timeout=100).until(
                lambda x: x.find_element_by_id(self.second_page_search_btn_id))
        except Exception as e:
            print 'exception= ', str(e)
            print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def click_search_button(self):
        try:
            #Click search button
            link = self.browser.find_element_by_id(self.second_page_search_btn_id)
            link.click()
            # wait for the page to load
            WebDriverWait(self.browser, timeout=100).until(
                lambda x: x.find_element_by_class_name('t_full'))
        except Exception as e:
            print 'exception= ', str(e)
            print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def click_next_button(self):
        try:
            #Click NEXT
            link = self.browser.find_element_by_id(self.next_link_id)
            link.click()
            # wait for the page to load
            WebDriverWait(self.browser, timeout=100).until(
                lambda x: x.find_element_by_class_name('t_full'))
        except Exception as e:
            print 'exception= ', str(e)
            print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def get_page_source(self):
        page_source = self.browser.page_source.decode('utf8')
        # f = open('myhtml.html','a')
        # f.write(page_source)
        # f.close()
        return page_source

    def get_job_info(self, new_browser, job_url):
        try:
            new_browser.get(job_url)
            html = new_browser.page_source
            soup = BeautifulSoup(html, 'html.parser')

            find_error = soup.find('div', attrs={'id' : 'errorid'})
            if find_error:
                return 1
            #Find designation
            data = soup.find('span', attrs={'id' : 'Designation'})
            if data:
                #print data.text
                f = open('jobs.txt','a')
                f.write(data.text + ' :: ')
                f.close()
            else:
                pass

            #Find Qualifications
            data_ql = soup.find('span', attrs={'id' : 'Qualification'})
            if data_ql:
                #print data_ql.text
                f = open('jobs.txt','a')
                f.write(data_ql.text + ' :: ')
                f.close()
            else:
                pass

            #Find Removal Date
            data_rd = soup.find('span', attrs={'id' : 'Removal Date'})
            if data_rd:
                #print data_rd.text
                f = open('jobs.txt','a')
                f.write(data_rd.text + '\n')
                f.close()
            else:
                pass
            return 0
        except Exception as e:
            print 'exception= ', str(e)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)
            return 1

    def get_jobs(self):
        try:
            h = HTMLParser()
            html = h.unescape(self.browser.page_source).encode('utf-8').decode('ascii', 'ignore')
            soup = BeautifulSoup(html, 'html.parser')
            data = soup.findAll('a', id=lambda x: x and x.startswith('popup'))
            counter = 0
            for a in data:
                if a.has_attr('href'):
                    counter = counter + 1
                    #self.DrawSpinner(counter)
                    try:
                        return_code = self.get_job_info(self.browser, self.base_job_url + a['href'].split('?')[1])
                        if return_code == 1:
                            #In case the error pages start to come
                            return
                    except Exception:
                        continue
        except Exception as e:
            print 'exception= ', str(e)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def click_pages(self, actual_page_number):
        for i in range(10, actual_page_number + 1, 4):
            try:
                self.browser.find_elements_by_css_selector('a[page="{}"]'.format(str(actual_page_number)))[0].click()
                print 'Page number', str(actual_page_number), 'clicked'
                # wait for the page to load
                WebDriverWait(self.browser, timeout=100).until(
                    lambda x: x.find_element_by_class_name('t_full'))
                return
            except Exception as e:
                try:
                    #If the actual page number is not on the screen
                    print 'page', str(actual_page_number), 'not found. i==', str(i), str(e)
                    self.browser.find_elements_by_css_selector('a[page="{}"]'.format(str(i + 4)))[0].click()
                    # wait for the page to load
                    WebDriverWait(self.browser, timeout=100).until(
                        lambda x: x.find_element_by_class_name('yui-pg-pages'))
                except:
                    continue

    def main(self):
        self.first_page(self.url)
        self.click_search_button()
        self.get_jobs()
        try:
            for i in xrange(2, int(5274/50) + 1):
                try:
                    self.first_page(self.url)
                    self.click_search_button()
                except Exception as e:
                    print 'exception= ', str(e)
                    #print 'stacktrace= ', traceback.print_exc()
                    print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)
                    print 'Starting iteration again...'
                    # NOTE: decrementing the loop variable cannot restart a for-loop
                    # iteration in Python, so on an error we simply move on to the next page
                    continue
                if i > 10:
                    #Click page 10
                    self.browser.find_elements_by_css_selector('a[page="{}"]'.format('10'))[0].click()
                    self.click_pages(i)
                elif i != 1:
                    self.browser.find_elements_by_css_selector('a[page="{}"]'.format(str(i)))[0].click()
                    print 'Page number', str(i), 'clicked'
                self.get_jobs()
                # for _ in range(1, i+1):
                #     self.click_next_button()
        except Exception as ex:
            print 'exception= ', str(ex)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

if __name__ == '__main__':
    start_time = time.time()
    sys.stdout.flush()
    sys.stdout.write('\b')
    Scraper().main()
    sys.stdout.flush()
    sys.stdout.write('\b')
    end_time = time.time()
    print 'Processing Time = ',  str(end_time-start_time)
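
For reference, each job scraped by get_job_info above ends up as a single line in jobs.txt, in the form Designation :: Qualification :: Removal Date.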

A Simple Web Crawler or Web Scraper


A web crawler (also known by other terms like ants, automatic indexers, bots, web spiders, web robots or web scutters) is an automated program, or script, that methodically scans or “crawls” through web pages to create an index of the data it is set to look for. This process is called web crawling or spidering.

There are various uses for web crawlers, but essentially a web crawler is used to collect/mine data from the Internet. Most search engines use them to provide up-to-date data and to find what’s new on the Internet. Analytics companies and market researchers use web crawlers to determine customer and market trends in a given geography. (source)
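
Before diving into the actual scraper, here is a minimal sketch of the crawl-and-index loop described above, using only urllib2 and BeautifulSoup. The starting URL is a placeholder; this illustrates the general idea, not the scraper from this post:

import urllib2
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=10):
    # Breadth-first fetch -> parse -> enqueue loop: the essence of a crawler
    to_visit, seen = [start_url], set()
    while to_visit and len(seen) < max_pages:
        url = to_visit.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib2.urlopen(url).read()
        except Exception:
            continue  # keep crawling even if one page fails
        soup = BeautifulSoup(html, 'html.parser')
        print(url)  # "index" the page; here we simply record its URL
        for a in soup.find_all('a', href=True):
            if a['href'].startswith('http'):
                to_visit.append(a['href'])
    return seen

# crawl('http://example.com')  # hypothetical starting point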

I wrote a simple web crawler in Python for a particular site, for the purpose of data mining. I used Selenium and BeautifulSoup4, probably the best combination in this business.

Following is the complete code; you can also find it on gist. The best part about this code is that it is fail-safe: the crawler won’t stop even if it encounters an error. A true crawler, in a way.

# -*- coding: utf-8 -*-
'''
Created on May 27, 2016

@author: abgupta
'''
from selenium.webdriver import Firefox
from selenium.webdriver.support.ui import WebDriverWait
import time, sys, traceback
from HTMLParser import HTMLParser
from bs4 import BeautifulSoup

class Scraper(object):
    '''
    classdocs
    '''
    def __init__(self):
        '''
        Constructor
        '''
        self.url = 'https://sjobs.brassring.com/TGWebHost/home.aspx?partnerid=25667&siteid=5417'
        self.base_job_url = 'https://sjobs.brassring.com/TGWebHost/jobdetails.aspx?'
        self.browser = Firefox()
        self.first_page_search_opening_id = 'srchOpenLink'
        self.second_page_search_btn_id = 'ctl00_MainContent_submit2'
        self.next_link_id = 'yui-pg0-0-next-link'

    #Spinner
    def DrawSpinner(self, counter):
        if counter % 4 == 0:
            sys.stdout.write("/")
        elif counter % 4 == 1:
            sys.stdout.write("-")
        elif counter % 4 == 2:
            sys.stdout.write("\\")
        elif counter % 4 == 3:
            sys.stdout.write("|")
        sys.stdout.flush()
        sys.stdout.write('\b')

    def first_page(self, url):
        try:
            self.browser.get(url)
            #link = self.browser.find_element_by_link_text('Search openings')
            link = self.browser.find_element_by_id(self.first_page_search_opening_id)
            link.click()
            # wait for the page to load
            WebDriverWait(self.browser, timeout=100).until(
                lambda x: x.find_element_by_id(self.second_page_search_btn_id))
        except Exception as e:
            print 'exception= ', str(e)
            print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def click_search_button(self):
        #Click search button
        link = self.browser.find_element_by_id(self.second_page_search_btn_id)
        link.click()
        # wait for the page to load
        WebDriverWait(self.browser, timeout=100).until(
            lambda x: x.find_element_by_class_name('t_full'))

    def click_next_button(self):
        #Click NEXT
        link = self.browser.find_element_by_id(self.next_link_id)
        link.click()
        # wait for the page to load
        WebDriverWait(self.browser, timeout=100).until(
            lambda x: x.find_element_by_class_name('t_full'))

    def get_page_source(self):
        page_source = self.browser.page_source.decode('utf8')
        f = open('myhtml.html','a')
        f.write(page_source)
        f.close()
        return page_source

    def get_job_info(self, new_browser, job_url):
        try:
            new_browser.get(job_url)
            html = new_browser.page_source
            soup = BeautifulSoup(html, 'html.parser')

            #Find designation
            data = soup.find('span', attrs={'id' : 'Designation'})
            if data:
                #print data.text
                f = open('descriptions.txt','a')
                f.write(data.text + '\n')
                f.close()
            else:
                pass

            #Find Qualifications
            data_ql = soup.find('span', attrs={'id' : 'Qualification'})
            if data_ql:
                #print data_ql.text
                f = open('descriptions.txt','a')
                f.write(data_ql.text + '\n')
                f.close()
            else:
                pass
        except Exception as e:
            print 'exception= ', str(e)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def get_jobs(self):
        try:
            h = HTMLParser()
            html = h.unescape(self.browser.page_source).encode('utf-8').decode('ascii', 'ignore')
            soup = BeautifulSoup(html, 'html.parser')
            data = soup.findAll('a', id=lambda x: x and x.startswith('popup'))
            #print data
            counter = 0
            for a in data:
                if a.has_attr('href'):
                    counter = counter + 1
                    self.DrawSpinner(counter)
                    try:
                        self.get_job_info(self.browser, self.base_job_url + a['href'].split('?')[1])
                    except Exception:
                        continue
            print counter
        except Exception as e:
            print 'exception= ', str(e)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def main(self):
        self.first_page(self.url)
        self.click_search_button()
        try:
            for i in range(1, int(5309/50) + 1):
                self.get_jobs()
                f = open('myhtml.html','a')
                f.write(self.browser.page_source)
                f.close()
                self.first_page(self.url)
                self.click_search_button()
                for _ in range(1, i+1):
                    self.click_next_button()
        except Exception as e:
            print 'exception= ', str(e)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)
if __name__ == '__main__':
    start_time = time.time()
    sys.stdout.flush()
    sys.stdout.write('\b')
    Scraper().main()
    sys.stdout.flush()
    sys.stdout.write('\b')
    end_time = time.time()
    print 'Processing Time = ',  str(end_time-start_time)

Python | A simple prototype for tracking human movement

At times social networking sites, job portals and even the government need to keep tabs on the movement of people from one place to another (or one nation to another). Governments do this analysis on a regular basis to check the rate of urbanization.

Assumptions: Let’s take the example of a social networking site. Here I have assumed that we have the following table (in Hadoop or any other Big Data framework of your choice) holding the data of people who moved from one place to another:

id              int   # Autogenerated ID
uid             int   # User ID
some_other_id   int
cur_location    int   # Location ID mapped to the "Location" table

So every time a user updates his/her location, we do an INSERT into the database rather than an UPDATE. This way we keep a record of the user’s movements in chronological order, which can be retrieved using the “User ID”.
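
As a concrete sketch of this INSERT-only idea (using MySQL here purely for illustration, since in this post the history table actually lives in Hive; the connection details are placeholders and user 42 / location 7 are made-up values):

import MySQLdb

conn = MySQLdb.connect(user='db', passwd='', host='', db='test')
cur = conn.cursor()

# On every location update: append a new row instead of updating in place
cur.execute('insert into user_location_history(uid, some_other_id, cur_location) '
            'values(%s, %s, %s)', (42, 0, 7))
conn.commit()

# The full movement path of a user, in chronological (insertion) order:
cur.execute('select cur_location from user_location_history '
            'where uid = %s order by id', (42,))
print([row[0] for row in cur.fetchall()])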

Architecture:

1. Retrieve the data [Hadoop layer using Hive]
2. Process it into meaningful data [Python]
3. Store the results [MySQL] (so that they are easy to use for display purposes)

Python Mining Engine:

Here is a simple Python program I wrote for this simple application. In actual real-world scenarios you might have to use or invent some complex data mining algorithms. For the purposes of this blog post I used the Python script to process 35,000 records successfully. Here is the complete code (you can also check it out here):

__author__ = 'Abhay Gupta'
__version__ = 0.1

import pyhs2
import time
import MySQLdb
import datetime
import sys
import pytz

def DATE_TIME():
    return datetime.datetime.now(pytz.timezone('Asia/Calcutta'))

def FORMATTED_TIME():
    return datetime.datetime.strptime(str(DATE_TIME()).split('.')[0], '%Y-%m-%d %H:%M:%S')

#Spinner
def DrawSpinner(counter):
    if counter % 4 == 0:
        sys.stdout.write("/")
    elif counter % 4 == 1:
        sys.stdout.write("-")
    elif counter % 4 == 2:
        sys.stdout.write("\\")
    elif counter % 4 == 3:
        sys.stdout.write("|")
    sys.stdout.flush()
    sys.stdout.write('\b')

#Generator
def neighourhood(iterable):
    iterator = iter(iterable)
    prev = None
    item = iterator.next()  # throws StopIteration if empty.
    for next in iterator:
        yield (prev,item,next)
        prev = item
        item = next
    yield (prev,item,None)

#hive cursor
def get_cursor():
        conn = pyhs2.connect(host='',
               port=10000,
               authMechanism="PLAIN",
               user='hadoop',
               password='',
               database='test')
        return conn.cursor()

def get_mysql_cursor():
        conn = MySQLdb.connect(user='db', passwd='',
                              host='',
                              db='test')
        return conn.cursor()

def get_records():
        cur = get_cursor()
        cur.execute("select * from user_location_history")
        #Fetch table results
        return cur.fetchall()

def get_user_movement():
        #Initializing
        location_dict = {}
        #Fetching all the records
        records = get_records()
        counter = 0
        for record in records:
                counter = counter + 1
                DrawSpinner(counter)
                if location_dict.has_key(record[1]):
                        location_dict[record[1]].append(int(record[3]))
                else:
                        location_dict[record[1]] = [int(record[3])]
        return location_dict

#For performance improvements use list instead of dictionary as we don't need count of the users here
def prepare_movement_data():
        #Initializing
        city_movement_data_dict = {}
        user_location_dict = get_user_movement()
        counter = 0
        for user_id, user_movement_path in user_location_dict.iteritems():
                counter = counter + 1
                DrawSpinner(counter)
                if len(set(user_movement_path)) > 1:
                        if city_movement_data_dict.has_key(tuple(user_movement_path)):
                                city_movement_data_dict[tuple(user_movement_path)] = city_movement_data_dict[tuple(user_movement_path)] + 1
                        else:
                                city_movement_data_dict[tuple(user_movement_path)] = 1
        return city_movement_data_dict

def store_mining_results(unique_movement_map_tuple):
        sql_query = None
        insert_flag = False
        update_flag = False
        cur = get_mysql_cursor()
        for prev, current, next in neighourhood(unique_movement_map_tuple):
                #Execute query
                cur.execute('select * from user_flow_location')
                #Handling the empty table case
                fetched_data = cur.fetchall()
                if len(fetched_data) == 0 and prev is not None and current is not None and prev != current and current != next:
                        sql_query = 'insert into user_flow_location(location_from, location_to, count)\
                                                values('+  str(prev) + ',' + str(current) + ', 1 )'
                        cur.execute(sql_query)
                else:
                        insert_flag = False
                        update_flag = False
                        for record in fetched_data:
                                if record[1] == prev and record[2] == current:  # match on (location_from, location_to)
                                        update_flag = True
                                        sql_query = 'update user_flow_location set count = ' + str(record[3] + 1) +\
                                        ' where id = ' + str(record[0])
                                        cur.execute(sql_query)
                                elif prev is not None and current is not None and prev != current and current != next:
                                        insert_flag = True
                if update_flag == False and insert_flag == True:
                        #Insert only if the entry doesn't exists
                        #A quick fix. It can be improved later.
                        cur2 = get_mysql_cursor()
                        cur2.execute('select * from user_flow_location where location_from = ' +\
                                         str(prev) + ' and location_to = ' + str(current))
                        if len(cur2.fetchall()) == 0:
                                sql_query = 'insert into user_flow_location\
                                                (location_from, location_to, count)\
                                                values('+  str(prev) + ',' + str(current)+', 1)'
                                cur.execute(sql_query)
                        cur2.close()
        cur.close()

if __name__ == '__main__':
        start_time = time.time()
        #spinner = spinning_cursor()
        print 'Preparing data...'
        city_movement_data_dict = prepare_movement_data()
        sys.stdout.flush()
        sys.stdout.write('\b')
        print '\nData prepared for mining.'
        print 'Processing data and storing results...'
        counter = 0
        for unique_movement_map, unique_user_count in city_movement_data_dict.iteritems():
                #print str(unique_movement_map), ' : ', str(unique_user_count), ' user(s) relocated through this path'
                store_mining_results(unique_movement_map)
                counter = counter + 1
                DrawSpinner(counter)
        sys.stdout.flush()
        sys.stdout.write('\b')
        end_time = time.time()
        print '\nData mining complete!'
        print '\nTotal execution time = ', str(end_time - start_time), ' seconds\n'
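
To make the logic above easier to follow, here is a small standalone demo of how the neighourhood generator turns one user’s movement path into from-to edges. The location IDs are made up, and the filtering condition is a simplified version of the checks in store_mining_results:

def neighourhood(iterable):
    # Same generator as above: yields (prev, item, next) triples
    iterator = iter(iterable)
    prev = None
    item = iterator.next()
    for next in iterator:
        yield (prev, item, next)
        prev = item
        item = next
    yield (prev, item, None)

path = (1, 1, 2, 3)  # hypothetical location IDs for one user
for prev, current, next in neighourhood(path):
    if prev is not None and prev != current:
        print('edge %s -> %s' % (prev, current))
# prints: edge 1 -> 2, then edge 2 -> 3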

How to import all the files from a directory using Python


STEP 1: Execute the following command in the folder from which you need to import files:

$ touch __init__.py

Or simply create a blank file __init__.py in the folder.

STEP 2: Prepare your __init__.py. The idea is to let __init__ know about all the files in your directory.

Paste the following code in __init__.py

import os
modules = []
file_list = os.listdir(os.path.dirname(__file__))
for files in file_list:
    mod_name, file_ext = os.path.splitext(os.path.split(files)[-1])
    if file_ext.lower() == '.py':
        if mod_name != '__init__':
            modules.append(files.split(".")[0])

__all__ = modules
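
With this __init__.py in place, a wildcard import pulls in every module the list names. Assuming the folder is a package called mymodules (a hypothetical name), the usage looks like:

from mymodules import *  # imports every .py module listed in __all__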

STEP 3: Import the modules in the desired file, present at some other location.

Include the following code at the top of your file:

import os
dir_name = 'foo.bar'
file_list = os.listdir(dir_name)
for files in file_list:
    mod_name, file_ext = os.path.splitext(os.path.split(files)[-1])
    if file_ext.lower() == '.py' and mod_name != '__init__':
        # e.g. "from foo.bar import mymodule"
        exec "from {0} import {1}".format(dir_name, files.split(".")[0])
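
The exec-based line works, but as an alternative sketch the same thing can be done with the standard importlib module, avoiding building code in strings. dir_name and the file loop carry the same assumptions as above; this is my variation, not part of the original technique:

import os
import importlib

dir_name = 'foo.bar'  # dotted package name, assumed to live at ./foo/bar
file_list = os.listdir(dir_name.replace('.', os.sep))
for files in file_list:
    mod_name, file_ext = os.path.splitext(files)
    if file_ext.lower() == '.py' and mod_name != '__init__':
        # equivalent of "from foo.bar import mod_name"
        globals()[mod_name] = importlib.import_module(dir_name + '.' + mod_name)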

Easy to make DIY 3D Hologram using Smartphone or Tablet | 3D Pyramid Hologram

Everyone loves Iron Man! And admit it, we all want Tony’s super cool lab, gadgets and of course Jarvis in our lives.

Sadly, most of us are not billionaires like him and so cannot afford to own such things. But we can at least simulate (a bit) the super cool 3D holograms he used to design the Iron Man suit.


Even in the cult classic Star Wars we have seen R2-D2 projecting a secret message in the form of a hologram, weird aliens attending meetings “virtually”, and so on.

How about creating one for yourself?
Here’s what you need:

  1. Old DVD case
  2. A paper cutter
  3. Graph paper or just plain paper
  4. Pencil and a ruler


Steps to make your cool 3D hologram:

  1. Cut out a trapezium from a paper of the following dimensions:
    (image: trapezium dimensions)
  2. Place this shape on the DVD case and cut out four such trapeziums. I suggest using a sharp cutting tool or knife; use the paper cutter for marking the outline. Be careful and do not break the plastic.
  3. Arrange these four plastic trapeziums into a pyramid and stick them together using adhesive tape or super glue.
    (image: the assembled pyramid)
  4. Voila! Your very own 3D hologram projecting pyramid is ready!

Now all you need to do is carefully place it over a smartphone’s or tablet’s screen while playing this video. You can try other videos available online too, but this one is my personal favourite.

Check out my first 3D hologram pyramid too:


Dynamic Field Generation in Web Applications


Dynamic field generation refers to generating user-defined fields in web applications. Often users are not satisfied with the options provided by default, for example in a simple feedback form for a product; the user may want to give feedback on some particular thing not available by default. This can be done simply by using the following technique.

STEP 1: Paste this code in the <head> section of the web page (it can be HTML, PHP, JSP, JSF or any similar technology).

<script language="javascript">
var i = 0;
function changeIt()
{
i++;

// NOTE: assumes the page body contains <div id="my_div"></div> to hold the generated fields
my_div.innerHTML = my_div.innerHTML + "<br><input type='text' style='font-family:Comic Sans MS' name='mytextbox" + i + "'> <p><label><input type='radio' name='mytext" + i + "' id='poor' value='poor' /></label><span class='buttons'>Poor</span><label><input type='radio' name='mytext" + i + "' id='avg' value='average' /><span class='buttons'> Average</span><input type='radio' name='mytext" + i + "' id='good' value='good' /></label><span class='buttons'>Good</span><label><input type='radio' name='mytext" + i + "' id='excellent' value='excellent' /></label><span class='buttons'>Excellent </span></p>";
if (i > 20) {
alert("Huh! Let me refresh please!"); //RESTRICTING USERS TO ADD ONLY 20 FIELDS BECAUSE OF DB LIMITATIONS
window.location.href = "http://localhost:8081/SamePage.jsp"; //ENTER THE URL OF THE SAME PAGE TO GET DISPLAYED AGAIN AFTER REFRESHING THE PAGE
}

}
</script>


STEP 2: Now in your web page, under the <form> tag, add the following code according to your design.

<input type="button" value="Add Field" id="button" onClick="changeIt()" style="color: #FFFFFF; background-color:#600;width: 65px;height: 40px;"/>

THAT’S IT! 🙂

Dynamic field (or form element) generation and deletion


The following code will take your web page one step ahead by providing the privilege of deleting generated elements too!

<script type="text/javascript"><!-- BEGIN HIDING
/* A simple demonstration of how to create and delete a textbox with an onclick event. Hope you have a great time! Enjoy coding... */
var x = 0;
document.onclick = function( add )
{ add = add ? add : window.event;
var button = add.target ? add.target : add.srcElement;
if (x < 20) {
if (( button.name ) && ( button.name == 'add' ))
{ x = x + 1; document.getElementById("box_numbers").value = x; // assumes a hidden input with id "box_numbers"
_form = document.getElementsByTagName('form')[0];
_text = document.createElement('input');
_button = document.createElement('input');
_div = document.createElement('div');
_div.id = 'div';
_text.type = 'text';
_text.size = '17';
_text.value = 'Item';
_text.id = 'text';
_text.name = "text" + x;
_div.appendChild(_text);
_button.type = 'button';
_button.name = 'b1';
_button.id = 'b' + x;
_button.value = 'Delete this item';
_div.appendChild(_button);

_form.appendChild(_div);
}}
if (x > 20) {
alert("Huh! Let me refresh please!");
window.location.href = "http://hpl-11-09:8080/charitra/participate.jsp";
}
if ( button.name && button.name == 'b1' ) { _form.removeChild(document.getElementById('div'));
}
}
//-->
</script>

We can also define the CSS for proper styling:


<style type="text/css">
<!--
form input[type=button] {height:22px; width:120px; background-repeat:no-repeat; background-color:#635757; border-radius:20px; border-color:#635757; color:white; font-weight:bold;}
-->
#contentbg{
background-image: url("images/bg2.jpg");
background-repeat: no-repeat;
}
#div{
width: 760px; height: 23px;
background-image: url('images/bg3.jpg'); background-repeat: repeat-y;
margin-left: -75px;
}
</style>


Generally, these simple privileges for the users make your web pages more interesting and real-time. I used this technique to find certain missing patterns, which I then used to develop a system that predicted certain improvements in an already existing system. The best thing is that it helps in taking data from the user, which in turn helps in predicting what the user needs, and can be used for suggesting improvements in existing systems by taking dynamic inputs from the users.