Top 10 promising Indian actors of the online world

Young India is ONLINE! Gone are the days when we kept our eyes glued to the stupid box, the television. We have other boxes now, the “smart” ones.

Lately, the online world has seen a lot of talent coming up in India, and I can’t help but notice that some of them are going to be BIG, BIG names in the entertainment industry in the near future. How can I say that? Because this is what happens in the rest of the world. One such example is Tim Bergling (better known as Avicii). His story made me believe that there is HUUUUUGE potential out there in the online world.

There are many doctors, cooks, actors, singers, techies and others who have shown their passion online and have received a lot of support from the “Young Social India”. But most of all we have enjoyed the comedians and actors, who have proven to be major stress busters in our lives. None of us minds taking a small break during work to enjoy or discuss new episodes of our favorite web series. Not only do they entertain us, they also raise important social concerns from time to time.

I’m a great fan of these artists and I’m sure you are too. Let’s admire some of them. Remember, these are “Promising Actors”, NOT “Comedians”.

Don’t agree with the list? Please comment.
Have I forgotten any name? Please suggest it, with reasons, to improve the list.

10. Angira Dhar

Remember the hot and talented Shahana from Bang Baja Baraat? With a banner like Yash Raj Films behind her, she had a great start. Having already featured in advertisements for Cadbury and Domino’s, she has a bright acting career ahead. She has successfully bagged a not-so-famous movie (Ek Bura Aadmi) as well.

9. Abish Mathew


Loaded with a range of emotional expressions, he is quite dynamic on stage. Abish has acted in many YouTube videos but is popularly known for his hilarious show, Son of Abish. He was offered the role of Mandal in TVF’s Pitchers, which he declined. That didn’t make any difference, as he is doing increasingly well day by day with his own show. The problem is that he just wants to remain a comedian.

8. Deepak Kumar Mishra

The talent of Deepak Kumar Mishra should not go unnoticed. An actor should be versatile, and he has proven this notably in Rowdies XXX and Permanent Roommates. FYI, he has directed one of the episodes of Barely Speaking with Arnub and four episodes of Permanent Roommates.

7. Nidhi Bisht


Extremely talented and popularly known for Permanent Roommates and Tripling Tiago, she quickly brings a smile to everyone’s face. She will be appearing in Phillauri alongside Anushka Sharma and Diljit Dosanjh this year. She also did an unforgettable Meenakshi Lekhika in Bollywood Aam Aadmi Party.

6. Nidhi Singh

Two words for her – “Pure Talent”. Already shooting for Brij Mohan Amar Rahe alongside Arjun Mathur, she is on the right track. Not to mention her much-appreciated role as Tanya in Permanent Roommates. If you are a fan, you must watch her less popular The Drama of the Dagger.

4. Sumeet Vyas

A popular name now, thanks to his long list of strong performances. Mostly known for Permanent Roommates, Tripling Tiago and Bang Baja Baraat, he has also acted in English Vinglish. He definitely has more to offer, and he is not settling down anytime soon.

3. Naveen Kasturia

He is no surprise on this list. Known for his performances in TVF Pitchers, Pure Veg and Rowdies XXX, very few people know that he was an assistant director on movies like Shanghai, LSD and Jashnn. He has proven that he can do a wide variety of roles, and that is why he enjoys a great fan following.

2. Jitendra Kumar

No introduction needed for “Jeetu”. He is the beloved Munna Jazbaati for all his fans. His Arjun Kejriwal is still unmatched. Besides good acting, he has a charm of his own (the charm of a common man) which leaves a lasting impression. He has a long list of unforgettable performances and a great fan following, and is known for his work in Munna Jazbaati, TVF Pitchers, Permanent Roommates, Tech Conversations with Dad, How to Train Your Dad, How to Train Your Son and Bollywood Aam Aadmi Party: Arnab’s Qtiyapa. He has also acted in the movie A Wednesday.

1. Bhuvan Bam
BB is a living example that success can be achieved by believing in yourself. He is extremely talented, but most importantly he has the right attitude. He shoots all his videos alone, plays all the characters, and sings and writes everything himself. All a middle-class person has is his family and friends; his friends never believed in him, and he couldn’t explain his work to his parents. He took all the chances in the world and achieved a lot in a short period of time. He is still a nobody in the business of showmanship, but something tells all of us that he is here to stay and achieve much more. He is known for BB Ki Vines and TVF’s Bachelors series. Because of his versatile acting, already established groups like TVF and AIB have approached him. He has already made it clear that he will target Bollywood in the near future.


Note – All opinions are personal. Emphasis is on ACTING and STRUGGLE.

Engineering of Failure

This article is for all the engineers who design and develop products across the world for CONSUMERS. True engineers have an urge, a strong desire, to keep developing new stuff, which not only makes this world more beautiful but also helps them pay their bills. Little are we aware that this “engineering” is designed to FAIL. The products we engineers develop, and the technology we use, have a “life span” or an “expiry date” dictated by the consumer society and the corporates.

I am a movieaholic! If there is one thing I can do at least once a day, it is watching movies. Speaking of movies, if you’ve never watched FIGHT CLUB, please do. It teaches us the basic mistakes that we make in our everyday lives.


Now, let’s discuss a simple everyday problem which we all face: the strong desire to buy a new smartphone.

You’ve got the latest and costliest phone on the market, with all the shiny features, but within a year you will plan to buy another one. Trust me, there is nothing wrong with your mind; this is how our brains are dictated (by society) to work. This “strong desire” is cultivated, or rather implanted, in our brains by the business minds of our world. It is like the idea of fashion and music. Sporting a particular style of clothes and listening to a particular style of music has been the default identity of every era. For example, bell-bottom pants and rock music were THE thing in the ’70s and ’80s. Now is the time of ever-changing and fast-evolving technologies: smartphones, gadgets, social media and Electronic Dance Music. While change is good, we need to understand why it is good and how it is practically achieved. Well, we achieve this by a combination of the FAILURE of something and the marketing of something else as BETTER.


All the above things may sound obvious, but they are NOT. There is a very interesting theory behind this called The Light Bulb Conspiracy, and there is a famous documentary of the same name about it.

According to this theory, products are designed to fail after a certain period of time. The same is true of technology. No matter how talented the engineers of any era are, their talent will always be dictated by the consumer-based society and by businessmen who think of nothing but profit. So whenever something great is created, it is degraded somewhat before being presented to the world, so that the product dies slowly, making way for something new which leads to a monetary profit rather than a technical one.

A classic example is Apple Inc. They tell the world that there is sheer brilliance in their products, but after a while they mark those same products as ‘inferior’ and stop supporting them, forcing their own customers (rather, consumers) to buy their new, costlier products. The aura and hype around Apple Inc. is another strategic plan to keep the company’s sales up.

Similar trends can be seen in technology itself. Take web development, for example. Every year something new comes into the picture for developing “cutting edge” websites, but the popular frameworks or technologies of today may vanish tomorrow; engineers never know. This is a direct result of the “Light Bulb Conspiracy”. Because of these profit-making strategies, software engineers are in constant fear of losing their jobs, which leads them to keep learning new technologies every now and then. The consequences paint a picture with its own pros and cons: on one hand there is constant restlessness in the IT world, but on the other it keeps an average IT professional’s brain alive and active.

One argument may be that new technologies provide better and more efficient software development techniques. This is partially true. While new programming languages, techniques, tools and frameworks do provide easier approaches to software development, they may also kill the jobs of skilled IT professionals who have dealt with the ‘hard way’ of software development.


The IT industry and technical companies are becoming increasingly clever at selling “old stuff in new packaging”. The biggest example is Cloud Computing. As an official article by IBM says, and I quote:

It (cloud computing) was a gradual evolution that started in the 1950s with mainframe computing.

Multiple users were capable of accessing a central computer through dumb terminals, whose only function was to provide access to the mainframe. Because of the costs to buy and maintain mainframe computers, it was not practical for an organization to buy and maintain one for every employee. Nor did the typical user need the large (at the time) storage capacity and processing power that a mainframe provided. Providing shared access to a single resource was the solution that made economical sense for this sophisticated piece of technology.

This is one big HOPE that IT has given us: even if, according to the Light Bulb Conspiracy, a particular technology is meant to die after some time, we can always redesign the business model and present it in some other context for the world to consume again. The people of the world will happily consume it if an urgent necessity and the “fashion to consume it” are shown to them in a proper manner. Then we too can sit back with popcorn and enjoy the show!


New improved web scraper | Python | Selenium | BeautifulSoup | PhantomJS

An improved version of A Simple Web Crawler or Web Scraper.
In this version the “Browser” part is minimized: I have used PhantomJS as a headless web browser, so no browser window opens while the scraper runs. You can also find it on gist.
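
The gist of the change, as a before/after sketch (the full listing follows):

# before (A Simple Web Crawler or Web Scraper): a visible Firefox window
from selenium.webdriver import Firefox
browser = Firefox()

# after: headless PhantomJS, with image loading switched off for speed
from selenium import webdriver
browser = webdriver.PhantomJS(executable_path='phantomjs.exe',
                              service_args=['--load-images=no'])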

# -*- coding: utf-8 -*-
'''
Created on May 27, 2016

@author: abgupta
'''
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
import time, sys, traceback
from HTMLParser import HTMLParser
from bs4 import BeautifulSoup

class Scraper(object):
    '''
    classdocs
    '''
    def __init__(self):
        '''
        Constructor
        '''
        self.url = 'https://sjobs.brassring.com/TGWebHost/home.aspx?partnerid=25667&siteid=5417'
        self.base_job_url = 'https://sjobs.brassring.com/TGWebHost/jobdetails.aspx?'
        # headless browser; phantomjs.exe is expected next to this script
        self.browser = webdriver.PhantomJS(executable_path='phantomjs.exe',
                                           desired_capabilities=webdriver.DesiredCapabilities.HTMLUNITWITHJS,
                                           service_args=['--load-images=no'])
        self.first_page_search_opening_id = 'srchOpenLink'
        self.second_page_search_btn_id = 'ctl00_MainContent_submit2'
        self.next_link_id = 'yui-pg0-0-next-link'

    #Spinner
    def DrawSpinner(self, counter):
        if counter % 4 == 0:
            sys.stdout.write("/")
        elif counter % 4 == 1:
            sys.stdout.write("-")
        elif counter % 4 == 2:
            sys.stdout.write("\\")
        elif counter % 4 == 3:
            sys.stdout.write("|")
        sys.stdout.flush()
        sys.stdout.write('\b')

    def first_page(self, url):
        try:
            self.browser.get(url)
            #link = self.browser.find_element_by_link_text('Search openings')
            link = self.browser.find_element_by_id(self.first_page_search_opening_id)
            link.click()
            # wait for the page to load
            WebDriverWait(self.browser, timeout=100).until(
                lambda x: x.find_element_by_id(self.second_page_search_btn_id))
        except Exception as e:
            print 'exception= ', str(e)
            print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def click_search_button(self):
        try:
            #Click search button
            link = self.browser.find_element_by_id(self.second_page_search_btn_id)
            link.click()
            # wait for the page to load
            WebDriverWait(self.browser, timeout=100).until(
                lambda x: x.find_element_by_class_name('t_full'))
        except Exception as e:
            print 'exception= ', str(e)
            print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def click_next_button(self):
        try:
            #Click NEXT
            link = self.browser.find_element_by_id(self.next_link_id)
            link.click()
            # wait for the page to load
            WebDriverWait(self.browser, timeout=100).until(
                lambda x: x.find_element_by_class_name('t_full'))
        except Exception as e:
            print 'exception= ', str(e)
            print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def get_page_source(self):
        page_source = self.browser.page_source.decode('utf8')
        #f = open('myhtml.html','a')
        #f.write(page_source)
        #f.close()
        return page_source

    def get_job_info(self, new_browser, job_url):
        try:
            new_browser.get(job_url)
            html = new_browser.page_source
            soup = BeautifulSoup(html, 'html.parser')

            find_error = soup.find('div', attrs={'id' : 'errorid'})
            if find_error:
                return 1
            #Find designation
            data = soup.find('span', attrs={'id' : 'Designation'})
            if data:
                f = open('jobs.txt','a')
                f.write(data.text + ' :: ')
                f.close()

            #Find Qualifications
            data_ql = soup.find('span', attrs={'id' : 'Qualification'})
            if data_ql:
                f = open('jobs.txt','a')
                f.write(data_ql.text + ' :: ')
                f.close()

            #Find Removal Date
            data_rd = soup.find('span', attrs={'id' : 'Removal Date'})
            if data_rd:
                f = open('jobs.txt','a')
                f.write(data_rd.text + '\n')
                f.close()
            return 0
        except Exception as e:
            print 'exception= ', str(e)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)
            return 1

    def get_jobs(self):
        try:
            h = HTMLParser()
            html = h.unescape(self.browser.page_source).encode('utf-8').decode('ascii', 'ignore')
            soup = BeautifulSoup(html, 'html.parser')
            data = soup.findAll('a', id=lambda x: x and x.startswith('popup'))
            counter = 0
            for a in data:
                if a.has_attr('href'):
                    counter = counter + 1
                    #self.DrawSpinner(counter)
                    try:
                        return_code = self.get_job_info(self.browser, self.base_job_url + a['href'].split('?')[1])
                        if return_code == 1:
                            #In case the error pages start to come
                            return
                    except Exception:
                        continue
        except Exception as e:
            print 'exception= ', str(e)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def click_pages(self, actual_page_number):
        for i in range(10, actual_page_number + 1, 4):
            try:
                self.browser.find_elements_by_css_selector('a[page="{}"]'.format(str(actual_page_number)))[0].click()
                print 'Page number', str(actual_page_number), 'clicked'
                # wait for the page to load
                WebDriverWait(self.browser, timeout=100).until(
                    lambda x: x.find_element_by_class_name('t_full'))
                return
            except Exception as e:
                try:
                    #If the actual page number is not on the screen
                    print 'page', str(actual_page_number), 'not found. i==', str(i), str(e)
                    self.browser.find_elements_by_css_selector('a[page="{}"]'.format(str(i + 4)))[0].click()
                    # wait for the page to load
                    WebDriverWait(self.browser, timeout=100).until(
                        lambda x: x.find_element_by_class_name('yui-pg-pages'))
                except:
                    continue

    def main(self):
        self.first_page(self.url)
        self.click_search_button()
        self.get_jobs()
        try:
            #5274 openings, 50 per results page
            i = 2
            while i <= int(5274/50):
                try:
                    self.first_page(self.url)
                    self.click_search_button()
                except Exception as e:
                    print 'exception= ', str(e)
                    #print 'stacktrace= ', traceback.print_exc()
                    print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)
                    print 'Starting iteration again...'
                    #Retry the same page. A while loop is used here because
                    #reassigning the loop variable of a for loop has no effect.
                    continue
                if i > 10:
                    #Click page 10 first, then hop towards the actual page
                    self.browser.find_elements_by_css_selector('a[page="{}"]'.format('10'))[0].click()
                    self.click_pages(i)
                elif i != 1:
                    self.browser.find_elements_by_css_selector('a[page="{}"]'.format(str(i)))[0].click()
                    print 'Page number', str(i), 'clicked'
                self.get_jobs()
                i = i + 1
                #for _ in range(1, i+1):
                #    self.click_next_button()
        except Exception as ex:
            print 'exception= ', str(ex)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

if __name__ == '__main__':
    start_time = time.time()
    sys.stdout.flush()
    sys.stdout.write('\b')
    Scraper().main()
    sys.stdout.flush()
    sys.stdout.write('\b')
    end_time = time.time()
    print 'Processing Time = ', str(end_time - start_time)

A Simple Web Crawler or Web Scraper


A web crawler (also known by other names such as ant, automatic indexer, bot, web spider, web robot or web scutter) is an automated program, or script, that methodically scans or “crawls” through web pages to create an index of the data it is set to look for. This process is called web crawling or spidering.

There are various uses for web crawlers, but essentially a web crawler is used to collect/mine data from the Internet. Most search engines use it as a means of providing up-to-date data and to find what’s new on the Internet. Analytics companies and market researchers use web crawlers to determine customer and market trends in a given geography. (source)

I wrote a simple web crawler in Python for a particular site, for the purpose of data mining. I used Selenium and BeautifulSoup4 for this, probably the best combination in this business.

Following is the complete code; you can also find it on gist. The best part about this code is that it is fail-safe: the crawler won’t stop even if it encounters an error. A true crawler, in a way.

# -*- coding: utf-8 -*-
'''
Created on May 27, 2016

@author: abgupta
'''
from selenium.webdriver import Firefox
from selenium.webdriver.support.ui import WebDriverWait
import time, sys, traceback
from HTMLParser import HTMLParser
from bs4 import BeautifulSoup

class Scraper(object):
    '''
    classdocs
    '''
    def __init__(self):
        '''
        Constructor
        '''
        self.url = 'https://sjobs.brassring.com/TGWebHost/home.aspx?partnerid=25667&siteid=5417'
        self.base_job_url = 'https://sjobs.brassring.com/TGWebHost/jobdetails.aspx?'
        self.browser = Firefox()
        self.first_page_search_opening_id = 'srchOpenLink'
        self.second_page_search_btn_id = 'ctl00_MainContent_submit2'
        self.next_link_id = 'yui-pg0-0-next-link'

    #Spinner
    def DrawSpinner(self, counter):
        if counter % 4 == 0:
            sys.stdout.write("/")
        elif counter % 4 == 1:
            sys.stdout.write("-")
        elif counter % 4 == 2:
            sys.stdout.write("\\")
        elif counter % 4 == 3:
            sys.stdout.write("|")
        sys.stdout.flush()
        sys.stdout.write('\b')

    def first_page(self, url):
        try:
            self.browser.get(url)
            #link = self.browser.find_element_by_link_text('Search openings')
            link = self.browser.find_element_by_id(self.first_page_search_opening_id)
            link.click()
            # wait for the page to load
            WebDriverWait(self.browser, timeout=100).until(
                lambda x: x.find_element_by_id(self.second_page_search_btn_id))
        except Exception as e:
            print 'exception= ', str(e)
            print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def click_search_button(self):
        #Click search button
        link = self.browser.find_element_by_id(self.second_page_search_btn_id)
        link.click()
        # wait for the page to load
        WebDriverWait(self.browser, timeout=100).until(
            lambda x: x.find_element_by_class_name('t_full'))

    def click_next_button(self):
        #Click NEXT
        link = self.browser.find_element_by_id(self.next_link_id)
        link.click()
        # wait for the page to load
        WebDriverWait(self.browser, timeout=100).until(
            lambda x: x.find_element_by_class_name('t_full'))

    def get_page_source(self):
        page_source = self.browser.page_source.decode('utf8')
        f = open('myhtml.html','a')
        f.write(page_source)
        f.close()
        return page_source

    def get_job_info(self, new_browser, job_url):
        try:
            new_browser.get(job_url)
            html = new_browser.page_source
            soup = BeautifulSoup(html, 'html.parser')

            #Find designation
            data = soup.find('span', attrs={'id' : 'Designation'})
            if data:
                #print data.text
                f = open('descriptions.txt','a')
                f.write(data.text + '\n')
                f.close()
            else:
                pass

            #Find Qualifications
            data_ql = soup.find('span', attrs={'id' : 'Qualification'})
            if data_ql:
                #print data_ql.text
                f = open('descriptions.txt','a')
                f.write(data_ql.text + '\n')
                f.close()
            else:
                pass
        except Exception as e:
            print 'exception= ', str(e)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def get_jobs(self):
        try:
            h = HTMLParser()
            html = h.unescape(self.browser.page_source).encode('utf-8').decode('ascii', 'ignore')
            soup = BeautifulSoup(html, 'html.parser')
            data = soup.findAll('a', id=lambda x: x and x.startswith('popup'))
            #print data
            counter = 0
            for a in data:
                if a.has_attr('href'):
                    counter = counter + 1
                    self.DrawSpinner(counter)
                    try:
                        self.get_job_info(self.browser, self.base_job_url + a['href'].split('?')[1])
                    except Exception:
                        continue
            print counter
        except Exception as e:
            print 'exception= ', str(e)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)

    def main(self):
        self.first_page(self.url)
        self.click_search_button()
        try:
            for i in range(1, int(5309/50) + 1):
                self.get_jobs()
                f = open('myhtml.html','a')
                f.write(self.browser.page_source)
                f.close()
                self.first_page(self.url)
                self.click_search_button()
                for _ in range(1, i+1):
                    self.click_next_button()
        except Exception as e:
            print 'exception= ', str(e)
            #print 'stacktrace= ', traceback.print_exc()
            print 'Line Number= ' + str(sys.exc_traceback.tb_lineno)
if __name__ == '__main__':
    start_time = time.time()
    sys.stdout.flush()
    sys.stdout.write('\b')
    Scraper().main()
    sys.stdout.flush()
    sys.stdout.write('\b')
    end_time = time.time()
    print 'Processing Time = ',  str(end_time-start_time)

Python | A simple prototype for tracking human movement

At times, social networking sites, job portals and even governments need to keep tabs on the movement of people from one place to another (or one nation to another). Governments do this analysis on a regular basis to check the rate of urbanization.

Assumptions: Let’s take the example of a social networking site. Here I have assumed that we have the following table (in Hadoop or any other Big Data framework of your choice) holding the data of people who moved from one place to another.

id               int  # Autogenerated ID
uid              int  # User ID
some_other_id    int
cur_location     int  # Location ID mapped to the "Location" table

So every time a user updates his/her location, we do an INSERT into the database rather than an UPDATE. This way we have a record of the user’s movements in chronological order, which can be retrieved using the “User ID”.
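
A minimal sketch of that write path (the helper name is hypothetical; the table and column names follow the script below). Every location change simply appends a new row:

def record_location_change(cursor, uid, new_location_id):
    # INSERT, never UPDATE: each change becomes a fresh row,
    # so the user's full movement history is preserved
    cursor.execute('insert into user_location_history (uid, cur_location) '
                   'values (%s, %s)', (uid, new_location_id))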

Architecture:

1. Retrieve data [Hadoop layer using Hive]
2. Process it into meaningful data [Python]
3. Store results [MySQL] (so that it is easy to use for display purposes)

Python Mining Engine:

Here is a simple Python program I wrote for this application. In real-world scenarios you might have to use or invent complex data-mining algorithms. For the purposes of this blog post, I used this Python script to process 35,000 records successfully. Here is the complete code (you can also check it out here):

__author__ = 'Abhay Gupta'
__version__ = 0.1

import pyhs2
import time
import MySQLdb
import datetime
import sys
import pytz

def DATE_TIME():
    return datetime.datetime.now(pytz.timezone('Asia/Calcutta'))

def FORMATTED_TIME():
    return datetime.datetime.strptime(str(DATE_TIME()).split('.')[0], '%Y-%m-%d %H:%M:%S')

#Spinner
def DrawSpinner(counter):
    if counter % 4 == 0:
        sys.stdout.write("/")
    elif counter % 4 == 1:
        sys.stdout.write("-")
    elif counter % 4 == 2:
        sys.stdout.write("\\")
    elif counter % 4 == 3:
        sys.stdout.write("|")
    sys.stdout.flush()
    sys.stdout.write('\b')

#Generator
def neighourhood(iterable):
    iterator = iter(iterable)
    prev = None
    item = iterator.next()  # throws StopIteration if empty.
    for next in iterator:
        yield (prev,item,next)
        prev = item
        item = next
    yield (prev,item,None)

#hive cursor
def get_cursor():
        conn = pyhs2.connect(host='',
               port=10000,
               authMechanism="PLAIN",
               user='hadoop',
               password='',
               database='test')
        return conn.cursor()

def get_mysql_cursor():
        conn = MySQLdb.connect(user='db', passwd='',
                              host='',
                              db='test')
        return conn.cursor()

def get_records():
        cur = get_cursor()
        cur.execute("select * from user_location_history")
        #Fetch table results
        return cur.fetchall()

def get_user_movement():
        #Initializing
        location_dict = {}
        #Fetching all the records
        records = get_records()
        counter = 0
        for record in records:
                counter = counter + 1
                DrawSpinner(counter)
                if location_dict.has_key(record[1]):
                        location_dict[record[1]].append(int(record[3]))
                else:
                        location_dict[record[1]] = [int(record[3])]
        return location_dict

#For performance improvements use list instead of dictionary as we don't need count of the users here
def prepare_movement_data():
        #Initializing
        city_movement_data_dict = {}
        user_location_dict = get_user_movement()
        counter = 0
        for user_id, user_movement_path in user_location_dict.iteritems():
                counter = counter + 1
                DrawSpinner(counter)
                if len(set(user_movement_path)) > 1:
                        if city_movement_data_dict.has_key(tuple(user_movement_path)):
                                city_movement_data_dict[tuple(user_movement_path)] = city_movement_data_dict[tuple(user_movement_path)] + 1
                        else:
                                city_movement_data_dict[tuple(user_movement_path)] = 1
        return city_movement_data_dict

def store_mining_results(unique_movement_map_tuple):
        sql_query = None
        insert_flag = False
        update_flag = False
        cur = get_mysql_cursor()
        for prev, current, next in neighourhood(unique_movement_map_tuple):
                #Execute query
                cur.execute('select * from user_flow_location')
                #Handling the empty table case
                fetched_data = cur.fetchall()
                if len(fetched_data) == 0 and prev is not None and current is not None and prev != current and current != next:
                        sql_query = 'insert into user_flow_location(location_from, location_to, count)\
                                                values('+  str(prev) + ',' + str(current) + ', 1 )'
                        cur.execute(sql_query)
                else:
                        insert_flag = False
                        update_flag = False
                        for record in fetched_data:
                                #Assumed column order: (id, location_from, location_to, count)
                                if record[1] == prev and record[2] == current:
                                        update_flag = True
                                        sql_query = 'update user_flow_location set count = ' + str(record[3] + 1) +\
                                        ' where id = ' + str(record[0])
                                        cur.execute(sql_query)
                                elif prev is not None and current is not None and prev != current and current != next:
                                        insert_flag = True
                if update_flag == False and insert_flag == True:
                        #Insert only if the entry doesn't exists
                        #A quick fix. It can be improved later.
                        cur2 = get_mysql_cursor()
                        cur2.execute('select * from user_flow_location where location_from = ' +\
                                         str(prev) + ' and location_to = ' + str(current))
                        if len(cur2.fetchall()) == 0:
                                sql_query = 'insert into user_flow_location\
                                                (location_from, location_to, count)\
                                                values('+  str(prev) + ',' + str(current)+', 1)'
                                cur.execute(sql_query)
                        cur2.close()
        cur.close()

if __name__ == '__main__':
        start_time = time.time()
        #spinner = spinning_cursor()
        print 'Preparing data...'
        city_movement_data_dict = prepare_movement_data()
        sys.stdout.flush()
        sys.stdout.write('\b')
        print '\nData prepared for mining.'
        print 'Processing data and storing results...'
        counter = 0
        for unique_movement_map, unique_user_count in city_movement_data_dict.iteritems():
                #print str(unique_movement_map), ' : ', str(unique_user_count), ' user(s) relocated through this path'
                store_mining_results(unique_movement_map)
                counter = counter + 1
                DrawSpinner(counter)
        sys.stdout.flush()
        sys.stdout.write('\b')
        end_time = time.time()
        print '\nData mining complete!'
        print '\nTotal execution time = ', str(end_time - start_time), ' seconds\n'

Tamasha


– Was the story really different?
YES and NO. Definitely not the best of stories and, in fact, many stories merged into one.

– What did Imtiaz Ali try to do?
A theatrical attempt to portray the wonderful chemistry of love and attraction, and the lesson of ‘dare to follow your passion’. A (same old) story of a regular guy breaking the shackles of, well, being a ‘regular guy’, but this time because a girl believes in him (this is new!).

– Acting?
Marvellous, as we all know. Deepika and Ranbir did a great job, as always.

– Music?
A. R. Rahman. Pointless to say anything more.

– What went right?
Wonderful locations. Great acting. That’s it! We understood what Imtiaz tried to do, but he missed by a margin. Still, hats off to him, as I personally liked it (for being different, and better than PRDP at least 😛)!

– What went wrong?
A cluster of too many events and the same old ‘follow your passion’ lesson. Irritating short flashbacks and a confused ‘frustrated Ranbir’ character (even Ranbir’s good acting couldn’t hide it).

Overall, a good watch for a generation that can help PRDP cross the Rs 400 crore mark (sorry, Bhai’s fans) and has the guts to say “I like Honey Chingg, Ranvir Chingg, etc.”.


How to import all the files from a directory using Python


STEP 1: Execute the following command in the folder you need to import files from:

$ touch __init__.py

Or simply create a blank file __init__.py in the folder.

STEP 2: Prepare your __init__.py. The idea is to make __init__.py aware of all the files in your directory.

Paste the following code in __init__.py

import os
modules = []
file_list = os.listdir(os.path.dirname(__file__))
for files in file_list:
    mod_name, file_ext = os.path.splitext(os.path.split(files)[-1])
    if file_ext.lower() == '.py':
        if mod_name != '__init__':
            modules.append(files.split(".")[0])

__all__ = modules
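
For instance, if the folder holds utils.py and parser.py (hypothetical file names) next to __init__.py, the loop above leaves:

__all__ = ['utils', 'parser']  # order follows os.listdir(), which is arbitrary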

STEP 3: Import in the desired file at some other location.

Include the following code at the top of your file:

import os

pkg_name = 'foo.bar'                      # dotted name of the package to import from
dir_path = pkg_name.replace('.', os.sep)  # its directory on disk
file_list = os.listdir(dir_path)
for files in file_list:
    mod_name, file_ext = os.path.splitext(os.path.split(files)[-1])
    if file_ext.lower() == '.py' and mod_name != '__init__':
        # builds and runs e.g. "from foo.bar import baz"
        exec "from {0} import {1}".format(pkg_name, mod_name)
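
Since __init__.py now defines __all__, a simpler alternative (a sketch, assuming the package directory, here foo.bar, is reachable from sys.path) is a plain wildcard import, because Python treats the names listed in __all__ as the modules to pull in:

from foo.bar import *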