Raspberry Pi Tutorials : how to assign static IP and change hostname

Building a Raspberry Pi cluster requires some initial setup. Here I assume that the OS running on each node is Raspbian. Once the cluster is built, one Raspberry Pi, designated as the master node, controls all the other nodes. So this post covers how the master node connects to the other nodes, how you can use the master node to log in to any other node in the cluster, how to assign static IP addresses, and how to tell the nodes apart by giving each one a different hostname.

Assigning a static IP address:

The first thing to do is assign a static IP address to each Pi. This makes it easy to connect to the other Pis (nodes) via SSH (Secure Shell). So first use the command:

ifconfig

This lists the network interfaces and their IP addresses. My result of running ifconfig was:

inet addr:192.168.3.2  Bcast:192.168.3.255  Mask:255.255.255.0

Make a note of these addresses. In the result above, the inet address is the node's current IP address. What we do now is assign a fixed address to each node. The first step is to decide on a range of IPs. I had 5 nodes, so I chose the range 192.168.3.215 to 192.168.3.219. Once the range is fixed, log in to one of the Pis.
Also, make a note of the gateway address by typing the following command:

sudo route -nee

Gateway: 192.168.3.1

The final step is to modify the interfaces file:

sudo nano /etc/network/interfaces

Remove the line that reads

iface eth0 inet dhcp

Add the following:

iface eth0 inet static
address 192.168.3.215 #change according to your range
netmask 255.255.255.0
network 192.168.3.0
broadcast 192.168.3.255
gateway 192.168.3.1

Save by pressing CTRL+X, then type Y to confirm and exit. Then reboot the Pi using:

sudo reboot

Now ping your gateway address (the router) to check the connection:

ping -c 3 192.168.3.1

The response should look somewhat like this:

64 bytes from 192.168.3.1: icmp_req=1 ttl=255 time=2.18 ms
64 bytes from 192.168.3.1: icmp_req=2 ttl=255 time=2.43 ms
64 bytes from 192.168.3.1: icmp_req=3 ttl=255 time=3.24 ms

 


 
One last thing that needs to be modified is the /etc/resolv.conf file. This file lists the DNS name servers that allow your Raspberry Pi to resolve names to IP addresses. For example, if you ping sourcedexter.com, the Raspberry Pi first has to look up the IP address of my python tutorials website.

Enter the following command to edit the resolv.conf file:

sudo nano /etc/resolv.conf

Enter the following Google public DNS server IP addresses:

nameserver 8.8.8.8
nameserver 8.8.4.4

Press CTRL-X to exit but remember to save the file by accepting the changes.

Now type ifconfig and the new IP shown will be the one you assigned. Repeat this for all the other nodes in the cluster individually.
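
Since every node gets the same stanza apart from its address, you may prefer to generate the per-node configuration with a short script instead of retyping it. Here is a minimal sketch (purely illustrative, using the example range 192.168.3.215 to 192.168.3.219 from above; adjust it to your own range before pasting anything into /etc/network/interfaces):

# generate_interfaces.py - prints a static "iface" stanza for each node
# (sketch only; range and gateway match the example values used above)
base = "192.168.3."
first_host, node_count = 215, 5

for host in range(first_host, first_host + node_count):
    print("# node %d" % (host - first_host + 1))
    print("iface eth0 inet static")
    print("address %s%d" % (base, host))
    print("netmask 255.255.255.0")
    print("network 192.168.3.0")
    print("broadcast 192.168.3.255")
    print("gateway 192.168.3.1")
    print("")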

 

Change the hostname manually

If SSH is not set up, the hostname can be changed manually by logging in to each Raspberry Pi individually and editing the hostname file:

sudo nano /etc/hostname

By default the content of the file is :

pi

Change it to whatever you want. For example, if the new hostname should be client003, delete the existing name and type the new one:
client003
Press CTRL+X followed by Y to save and exit. From the next login onwards the new hostname will be used.


 

SOURCE :   http://www.suntimebox.com/ ,  http://www.southampton.ac.uk/~sjc/raspberrypi/


How to encrypt data in Linux using GPG and OpenSSL

1 Introduction

Encryption is the process of encoding messages or information in such a way that only authorized parties can read them. With almost no privacy left in this digital generation of ours, encrypting our data is one of the most necessary measures. Most applications, like Gmail, encrypt our data, but the data on your own system is still unsecured, and there are attackers or unauthorised users waiting to access it.

When you are building applications on various Linux based devices, you still have to take care of the data on them. One way to minimize the risk of data theft is to encrypt the data that is stored on the local system.

Now there is another very interesting topic called steganography, which involves hiding data. The key difference between encryption and steganography is that with encryption you know the data is there but cannot make any sense of it, and decryption is the only way to read it. With steganography, you won't even know that the data exists; it is hidden in plain sight. You can easily build your own steganography tool in python.

This tutorial demonstrates several methods of encrypting the data on Linux systems using command line tools.

2 Encryption using GPG

2.1 GPG Introduction

GPG stands for GNU Privacy Guard (GnuPG), a command line utility used to encrypt and decrypt data files or folders using either symmetric or public key encryption. GPG is a GPL-licensed alternative to the PGP cryptographic software suite and is used by OpenPGP-compliant systems as well.

2.2 Encryption using Symmetric Key

Here I have a file named "test.txt" that I will encrypt and then decrypt with a symmetric key, writing the decrypted text to another file called "output.txt".

Run the following command to encrypt the file test.txt using a symmetric key. The "-c" option tells GPG to use symmetric encryption.

gpg -c test.txt

The result will look like the image below. The first time GPG is run, a .gnupg folder is created; it contains the files that are necessary for the encryption process. GPG then asks you to enter a passphrase twice. Please make sure that you choose a strong passphrase and that you remember it, as you will need it later to decrypt your files.

Once the passphrase is entered correctly, a file called "test.txt.gpg" is created. This is the encrypted file. The following image shows the file before and after encryption. You can see that the encrypted text is in an unreadable format.

Use the following command to decrypt the encrypted file:

gpg -o output.txt test.txt.gpg

You will be prompted for the passphrase used during encryption. Once you enter it correctly, an "output.txt" file will be created with the same contents as "test.txt". The output of the decryption might look similar to the image below:
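
If you would rather script this instead of typing the gpg commands by hand, the third-party python-gnupg package (pip install python-gnupg) wraps the same gpg binary. The sketch below only illustrates the symmetric round trip described above; the file names match the example, but the keyword arguments and passphrase handling are assumptions on my part, so check the python-gnupg documentation before relying on it:

import gnupg  # third-party wrapper around the gpg binary

gpg = gnupg.GPG()  # uses the default ~/.gnupg home directory

# Symmetric encryption, roughly equivalent to "gpg -c test.txt"
with open("test.txt", "rb") as f:
    gpg.encrypt_file(f, recipients=None, symmetric=True,
                     passphrase="a strong passphrase",
                     output="test.txt.gpg")

# Decryption, roughly equivalent to "gpg -o output.txt test.txt.gpg"
with open("test.txt.gpg", "rb") as f:
    gpg.decrypt_file(f, passphrase="a strong passphrase",
                     output="output.txt")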

2.3 Public Key Encryption

Here we will encrypt a set of files using GPG's public/private key mechanism. It involves creating a private key, which should never be shared with anyone, and a public key, which has to be shared with the people who want to send you encrypted data. Public key cryptography is a very interesting topic; you can read about it in detail here.

First, we have to pack the files into a compressed archive. Here I have a directory called "enctest" with three files, test1.txt to test3.txt. We will compress this directory into a tar.gz file. I will use the following command to create the compressed tar.gz archive:

tar czf files.tar.gz ~/enctest

This creates the file "files.tar.gz". We now have to generate the public/private key pair. Run the following command to generate the key:

gpg --gen-key

Remember, this has to be done only once, and any number of files and folders can be encrypted with this key. Once you type this command, a series of questions will be asked:

  • What kind of encryption to use? I selected 1, which is RSA and RSA.
  • What should be the key size? I chose 2048; you can choose any size between 1024 and 4096.
  • When should the key expire? I selected 0, which means the key never expires. You can instead provide days, weeks, months or years if you want it to expire after a particular time.

You will also be prompted to enter a passphrase twice. Make sure you use a strong one and that you remember it. Your credentials are used as well; the ones I have used here (shown below) are just for testing. It is recommended that you use your genuine credentials, such as your name and email ID, and provide some comment.

The following content shows my answer and how the output will be:

gpg (GnuPG) 1.4.16; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 2048
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 0
Key does not expire at all
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: John Doe
Email address: johndoe@somemail.com
Comment: tis is key generation
You selected this USER-ID:
    "John Doe (tis is key generation) <johndoe@somemail.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? o
You need a Passphrase to protect your secret key.

Once you enter the passphrase, GPG begins to generate the key and asks you to create some activity: it is recommended to move the mouse, type something, or open some files so that the disks are used. GPG uses this activity to gather random bits (entropy). You may have to do this more than once. The output for me is shown below:

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 187 more bytes)
+++++
...+++++
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 92 more bytes)
.....+++++

Not enough random bytes available.  Please do some other work to give
the OS a chance to collect more entropy! (Need 114 more bytes)

+++++

Once done, the key has been generated. It will look similar to the content below:

gpg: /home/akshay/.gnupg/trustdb.gpg: trustdb created
gpg: key FA2314B6 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   2048R/FA2314B6 2015-04-02
      Key fingerprint = 5A02 5D77 3E0A 8B69 8086  3032 DE51 6EA5 FA23 14B6
uid                  John Doe (tis is key generation) <johndoe@somemail.com>
sub   2048R/6F78E642 2015-04-02

There are two important things here: provide a strong passphrase, and make sure you remember it.

Now that the keys are generated, we have to export the public key so that it can be imported on other systems or sent by email. Use the following command to export it:

gpg --armor --output file-enc-pubkey.txt --export 'John Doe'

Replace ‘John Doe’ with the name you used while generating the key.

It is also recommended to take a backup of the private key. We can use gpg to do that. To take the backup, use the following command:

gpg --armor --output file-enc-privkey.asc --export-secret-keys 'John Doe'

Here the file "file-enc-privkey.asc" holds the backup of the private key. Once the export and key backup are complete, we can encrypt and decrypt the .tar.gz file. Use the following command to encrypt:

gpg --encrypt --recipient 'John Doe' files.tar.gz

Remember to change ‘John Doe’ in the above command to the name given by you during key generation, else the encryption will fail. When the command runs successfully, an encrypted file called “files.tar.gz.gpg” will be created.

Now we can decrypt the tar.gz archive using the following command. It will use the private key along with the passphrase to decrypt and provide the decrypted folder. Use the following command to decrypt:

gpg --output output.tar.gz --decrypt files.tar.gz.gpg

The above command will ask for the passphrase, decrypt the encrypted file, and create a compressed file named "output.tar.gz", which can then be extracted with tar to get the original files back. The following image shows the output of the encryption and decryption commands:
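
The public key workflow can be scripted in the same way. Again this is only a sketch built on the third-party python-gnupg package, with names and keyword arguments assumed from its documentation rather than taken from this article:

import gnupg  # third-party wrapper around the gpg binary

gpg = gnupg.GPG()

# Generate a 2048-bit RSA key pair (the interactive questions from
# "gpg --gen-key" are answered through gen_key_input here)
key_input = gpg.gen_key_input(key_type="RSA", key_length=2048,
                              name_real="John Doe",
                              name_email="johndoe@somemail.com",
                              passphrase="a strong passphrase")
key = gpg.gen_key(key_input)

# Export the public key, as "gpg --armor --output ... --export" does
with open("file-enc-pubkey.txt", "w") as f:
    f.write(gpg.export_keys(key.fingerprint))

# Encrypt the archive for that recipient and decrypt it again
with open("files.tar.gz", "rb") as f:
    gpg.encrypt_file(f, recipients=[key.fingerprint],
                     always_trust=True, output="files.tar.gz.gpg")
with open("files.tar.gz.gpg", "rb") as f:
    gpg.decrypt_file(f, passphrase="a strong passphrase",
                     output="output.tar.gz")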

2.4 Why GPG?

GPG supports both public key encryption and symmetric encryption, which provides a good amount of flexibility for a wide range of applications. There is no need to hand over any sensitive information, and gpg can encrypt to any number of recipients by using their public keys. The user also gets to choose from multiple encryption algorithms. These reasons make it a very useful security tool for encrypting files, folders or data.


3 Encryption using OpenSSL

3.1 Introduction to OpenSSL

The OpenSSL project is a collaborative effort to develop a robust, commercial-grade, full-featured and open source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS) protocols as well as a full-strength general purpose cryptography library. OpenSSL is available for most Unix-like operating systems and is based on SSLeay; it is also used by many SSH, SFTP, and SCP applications. Here we use OpenSSL to encrypt data with asymmetric (RSA) encryption; for bigger files or larger amounts of data, symmetric encryption with a cipher such as AES is the better fit.

3.2 Generating the Public and Private keys

The first thing we have to do is generate the public and private keys. We first generate the private key. To do so, use the following command:

openssl genrsa -out private_key.pem 1024

The above command instructs OpenSSL to use RSA to generate a private key with a size of 1024 bits. The key is then stored in a file called "private_key.pem". The output of this command will look similar to the image below:

Once the private (the secret) key is generated, we can use that to generate the public key so that they form a pair. Use the following command to generate the public key:

openssl rsa -in private_key.pem -out public_key.pem -outform PEM -pubout

It will look like the image below:

3.3 Encrypting data

We can now use the public key to encrypt data. Here we will encrypt the file "encrypt.txt" and store the encrypted data in the file "encrypt.dat". Execute the following command:

openssl rsautl -encrypt -inkey public_key.pem -pubin -in encrypt.txt -out encrypt.dat

The following images show the text file before and after encryption:

3.4 Decrypting data

Here we use the private key to decrypt the file. Run the following command:

openssl rsautl -decrypt -inkey private_key.pem -in encrypt.dat -out decrypt.txt

The file decrypt.txt will contain the decrypted data. The execution of the above command and also the file content is shown in the image below:
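
For completeness, the same round trip can be done from Python with the third-party cryptography package (pip install cryptography). This is a rough sketch, not part of the original article: the file names mirror the openssl examples above, and the choice of PKCS#1 v1.5 padding (rsautl's default) is my assumption.

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Load the key pair generated by the openssl commands above
with open("private_key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)
public_key = private_key.public_key()

# Encrypt with the public key (a 1024-bit key only fits a small message per block)
with open("encrypt.txt", "rb") as f:
    ciphertext = public_key.encrypt(f.read(), padding.PKCS1v15())
with open("encrypt.dat", "wb") as f:
    f.write(ciphertext)

# Decrypt with the private key
plaintext = private_key.decrypt(ciphertext, padding.PKCS1v15())
with open("decrypt.txt", "wb") as f:
    f.write(plaintext)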

Source : My article from HowtoForge

Django, python web-framework installation

Django is a Python web framework that helps you rapidly build high-performance and efficient web applications. It is very much liked by the developer community because of features like its template system, URL design, etc. Django supports both Python 2.7.x and Python 3.x. Some well-known web applications built using Django are:

  • Instagram – a photo sharing app for Android and iOS.
  • Pinterest – a virtual pin board to share things you find on the web.
  • Mozilla – creators of the Firefox browser and Firefox OS.

And many, many more. This encouraged me to start learning Django and try building my own web application. But when I started searching for resources, I found it difficult as a beginner to figure out what I needed to install to get it up and running.

I was confused by the variety of choices there were for installing and setting it up. But after a lot of searching and experimenting, I found one straightforward method that is good enough for a beginner.

It’s important you know how to program in python so that it helps you to build awesome applications quickly. There are a lot of amazing free courses to learn python that you can make use of.

So, Here are the steps to install it.

1 Installing Python

The first step is to install Python. Most Linux distributions have Python 2.7 installed by default. To check if it exists, use the following command:


python --version

you may get an output similar to

Python 2.7.6

or any other version installed. If not, then, it can be downloaded from HERE.

2 Installing a database system (SQLite)

Since most web applications need a database and have to run queries against it, it's better to have a database set up on your system. Django supports database engines like PostgreSQL, MySQL, SQLite and Oracle. It's very simple to use a DB with Python, and knowing how gives you an added advantage in landing your next high-paying job.

SQLite is a light-weight database and is good enough to begin with. For any simple web application that you develop, you can use SQLite itself and later upgrade to suit your needs. To install SQLite, use the following command:


sudo apt-get install sqlite

Please note that on some Linux systems SQLite comes preinstalled along with Python; in that case the above command can be skipped.

3 Installing pip and easy_install

Any previously existing version of Django has to be removed. If you use pip or easy_install for the installation, you don't have to worry about removing previous versions, because pip or easy_install will do it for you. So, install both of them with the following command:


sudo apt-get install python-setuptools

The above command installs the required Python setup tools along with easy_install. In most cases, "pip" is preinstalled. If it isn't, install pip as described in the official documentation HERE.

Before proceeding, confirm that python, SQLite, pip and easy_install have been installed. To do so, run the commands shown in the image below one after another; the output of each command should be similar (not identical) to what is shown.

4 Installing a virtual environment

In this step, we install a "virtual environment". After a lot of searching and testing, I found that Django runs very easily in a virtual environment. A virtual environment encapsulates all the data and resources required to run Django in one place, so that all the changes made remain inside that environment. Another important benefit is that the light-weight development web server provided by Django works out of the box there, so installing and integrating an Apache server can be avoided.

One of the easiest ways to install a virtual environment on Linux is with the "easy_install" command. This script comes with the python-setuptools package, which we installed in the previous step. So now we can install virtualenv using the following command:


sudo easy_install virtualenv

Be patient, as it may take some time depending on the speed of the internet. When finished, the terminal output should be similar to the image below.

5 Creating and setting up the virtual environment

Now we create a folder with virtualenv so that it can act as the virtual environment containing Django. Type the following command in the terminal:

virtualenv --no-site-packages django-user

Here django-user is the folder that will be created and used as the environment. It will be created under the directory you are currently in. Now to start the environment use the command:


source django-user/bin/activate

Now if you see your folder name

(django-user)

at the beginning of the prompt, it means that the environment is active. Refer to the image below.

Navigate to the folder django-user using the command.


cd django-user

Upon listing the items in the folder with the "ls" command, you will see directories like bin, lib, include and local. Any command or operation performed inside the environment will not affect anything outside it. The changes are isolated, which lets us create as many environments as we want and test many things very easily.
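
As a quick illustration of that isolation (not one of the original steps), you can print the interpreter prefix from inside and outside the environment and compare:

import sys

# Inside the activated environment this points at .../django-user;
# outside it, it points at the system Python installation.
print(sys.prefix)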
6 Installing the Django framework
The final step is installing Django inside the environment we created in the previous step. Remember that you still have to be in the virtual environment, in the django-user folder, otherwise Django will be installed outside the environment and cannot be used there. To install Django use the command:

easy_install django

As a reference, view the following image. Note that the beginning of the prompt says (django-user), which means that you are currently in the virtual environment, and that before installing Django you should be within the "django-user" directory. This is very important.

That's it! Django is installed on your system with all the functionality a beginner needs to develop and learn the framework. Now you can go ahead and try the Django tutorial to learn the different features and run your first web app. You can find the tutorial in the official Django documentation HERE.
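
If you want a quick sanity check from inside the environment, this illustrative snippet prints the Django version that was just installed:

import django

print(django.get_version())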


 

Find synonyms and hyponyms using Python nltk and WordNet

What are Wordnet, Hyponyms, and synonyms?

WordNet is a large collection of words and vocabulary from the English language that are related to each other and grouped in some way. That's the reason WordNet is also called a lexical database.

WordNet groups nouns, adjectives and verbs which are similar and calls such groups synsets, or synonym sets. A synset might in turn belong to some other synset. For example, the synsets "brick" and "concrete" belong to the synset "construction materials", and the synset "brick" also belongs to another synset called "brickwork". In this example, brick and concrete are called hyponyms of the synset construction materials, while the synsets construction material and brickwork are called synonyms.

You can imagine wordnet as a tree, where synonyms are nodes on the same level and hyponyms are nodes lower than the current node.

What is nltk ?

Natural Language Toolkit (NLTK) is a Python library for processing human language. Not only does it have various features to help with natural language processing, it also comes with a lot of data and corpora that can be used. WordNet is one such corpus provided by the nltk data.

How to install nltk and Wordnet  ?

To install nltk on Linux and Mac, just run the following command :


sudo pip install nltk

For full installation details and installation on other platforms visit their official installation page.

Once nltk is downloaded, you can download wordnet using the nltk data interface. Follow the instructions given here.

How do you find all the synonyms and hyponyms of a given word ?

We can use the downloaded data along with the nltk API to fetch the synonyms of a given word directly. To fetch all the hyponyms of a word, we have to recursively visit each node and its synonyms in the WordNet hierarchy. Here are python scripts to do that.

  • Get all synonyms or Thesaurus  for a given word

    from nltk.corpus import wordnet as wn

    input_word = raw_input("Enter word to get different meanings: ")

    for i, j in enumerate(wn.synsets(input_word)):
        print "Meaning", i, "NLTK ID:", j.name()
        print "Definition:", j.definition()
        print
    
    

    The following example finds the synonyms/synsets for the word car:

    (screenshot of the output for "car")

  • Get all the hyponyms and hypernyms for a given word

    
    from nltk.corpus import wordnet as wn
    from itertools import chain

    input_word = raw_input("Enter word to get hyponyms and hypernyms: ")

    for i, j in enumerate(wn.synsets(input_word)):
        print "Meaning", i, "NLTK ID:", j.name()
        print "Hypernyms:", ", ".join(list(chain(*[l.lemma_names() for l in j.hypernyms()])))
        print "Hyponyms:", ", ".join(list(chain(*[l.lemma_names() for l in j.hyponyms()])))
        print
    
    

    Hypernyms are nothing but the synsets above a given word. Getting all the hyponyms and hypernyms of a word is also called extracting the ontology of the word. In the following example, the ontology for the word car is extracted.

    (screenshot of the ontology output for "car")

  • Get all hyponyms given a synset ID

    Each synset has an ID, which is nothing but the offset of that particular word in the list of all words. If you know the ID of a synset and want to find the IDs of all its hyponyms instead of the meanings and definitions, you can do this:

    
    from nltk.corpus import wordnet as wn

    X = []

    id = int(raw_input("enter synset ID: "))
    wr = wn._synset_from_pos_and_offset('n', id)

    def traverse(wr):
        if len(wr.hyponyms()) == 0:
            X.append(wr.offset())
        else:
            for each_hypo in wr.hyponyms():
                traverse(each_hypo)

    traverse(wr)
    print X
    
    

    (screenshot of the output)

Source : StackOverflow

Search Wikipedia from Command Prompt

This tutorial shows how to quickly set up Python scripts to search Wikipedia right from the terminal. This is ideal for people who want to quickly read up on a topic without having to open a browser, wait for it to load and search; the terminal does the job more quickly and efficiently.

The way to solve this is to use the Wikipedia API: send an HTTP request to the site as a query action in JSON format and get the response back as a JSON object. This could be implemented using the requests module in python.
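
As a side note, that query action can be sent directly with the requests package. The sketch below is only an illustration of the idea; the en.wikipedia.org endpoint and the extracts parameters are assumptions based on the public MediaWiki API and are not part of the two scripts used later in this post:

import requests

def summary(topic):
    # Ask the MediaWiki API for a plain-text extract of the article lead
    response = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "extracts",
            "explaintext": 1,
            "exintro": 1,
            "titles": topic,
            "format": "json",
        },
        timeout=10,
    )
    pages = response.json()["query"]["pages"]
    # "pages" is keyed by page id; take the single entry returned
    return next(iter(pages.values())).get("extract", "No text returned")

print(summary("Python (programming language)"))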

The next step is to parse the JSON response. I found that Beautiful Soup can be used to do that; it was one of the best options available. Once the parsing is complete, we only have to display the data.

I found 2 scripts that do just that. These scripts, however, don't use the requests module but use urllib and urllib2.

Advantage:
The advantage of this approach is that only a brief summary of the topic under search is shown, and most of the time that is the only thing we want.

The second thing is that if there are sections in the topic, they will be shown.

Disadvantage:
What this lacks is good pattern matching for searches: if the exact words are not found in an article, nothing is returned. The same happens when a search has multiple results.
Sometimes the amount of data returned is also either far too little or far too much.

Procedure:

First download and save these 2 python files:
wikipedia.py : This python program is used to form the URL for the search term and fetch the article from wikipedia.com. Once the URL is formed, we send a request using the urllib python library, perform the search and get the whole response from Wikipedia.

wiki2plain.py : This program converts the full document received from the previous program into a readable text format. Usually, the response from the previous program is in the form of json/html, so we use this program to parse the json/html and get meaningful data on the topic.

Then create a python file and name it wiki.py and paste the following script in it:


from wikipedia import *
from wiki2plain import *

lang = 'simple'
wiki = Wikipedia(lang)
try:
    data1 = raw_input("enter search query: ")
    raw = wiki.article(data1)
except:
    raw = None

if raw:
    wiki2plain = Wiki2Plain(raw)
    content = wiki2plain.text
    print content
else:
    print "No text returned"

This code just calls the previously downloaded files and lets you enter a topic interactively and search Wikipedia.
Save it in the same folder as the other two files and run wiki.py.

Enter the search term and get the results.


Hitchhiker’s guide to learning python

This blog post will be a guide to python resources, right from where you can start learning this amazing language to finding resources for solving complex problems in fields like computer vision, big data, natural language processing, etc.

My aim is to refine this post to make it better each week, add more resources, add more information and eventually create a path for people to choose from. But it's gonna take time to get there. Till then, I hope this continues to help you.

Please feel free to add your suggestions in the comments.

This post will keep growing, so come back each week to find more resources:

1) Where to begin learning Python ?

There are tons of resources out there but very few that teach you to use this language the right way while showing you the power it has. Here is a list that has resonated well with me:

  • Head First Python : For absolute beginners who want to have a taste of all the things python can do, this is the right book. The advantages of learning from this book are:
    • It uses a project based approach, so you can see your progress visually with what you’ve achieved so far.
    • It dives straight to programming from the first chapter and has exercises in between chapters to help you make sure you understand concepts.
    • It covers various fields like standalone applications, web applications and mobile applications, so you know the capability of python and where it can be used.
    • By the end of this book, you will be able to build applications on your own with very little help.

The disadvantage is that it's not for people who prefer an in-depth and detailed explanation of each concept.

If this is the right book for you, you can purchase it here :

  • Think Python (How to think like a computer scientist) : This is a free e-book designed for people who like to master the core concepts, the syntax and the features available in python. It focuses on introducing you to different programming concepts and how they can be implemented effectively. This book gets into each aspect of programming, be it recursion, inheritance, etc., in much more detail than the "Head First Python" book.

There is a hard cover book (link below); the latest edition has more in-depth explanations and resources, and is updated with many more examples and real use case scenarios.

  Advantages : 

  1. Teaches the concepts and efficient programming paradigms in depth
  2. Uses a scientific approach by providing resources on algorithms and efficient data structure implementations
  3. Provides a guide to tools and libraries for mathematical computations, and also insights into data analysis

You can buy the paperback  from here :

 

 

 

Handling Database Tables with no Primary key in Spring MVC and Hibernate

If you are building your web application in Java, you might be using Spring MVC with Hibernate. Both frameworks are extremely powerful and let you, the developer, focus on building the application itself rather than worrying about the many other factors involved in managing and maintaining the project's dependencies.

You might have designed a database model and have tables without a primary key. Tables which map multiple tables (a.k.a. mapping tables) usually do not have a primary key. While this seems normal, there are situations when you will have to insert or update values in such a table and you will find it difficult to do so. Why? Because without a key there is ambiguity about which row to update or delete. Fortunately, there is an easy way to solve this problem.

This is gonna be a long post, so brace yourselves. I will try my best to make my point, but if you have any queries/suggestions, feel free to let me know in the comments below. So, let's begin.

The answer to this issue is to use an embedded entity which provides a key to the persistent entity that lacks a primary key. The two annotations that will be required are @Embeddable and @EmbeddedId.

Consider two persistent entities called Car.java and Color.java. These two have a primary key each and represent the tables "CAR" and "COLOR" in the "VEHICLE" database/schema.

Car.java:

package com.example.entities;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "CAR", catalog = "VEHICLE")
public class Car implements java.io.Serializable {

    private int carId;
    private String carName;

    @Id
    @Column(name = "CAR_ID", unique = true, nullable = false)
    public int getId() {
        return this.carId;
    }

    public void setId(int carId) {
        this.carId = carId;
    }

    @Column(name = "CAR_NAME", nullable = false)
    public String getCarName() {
        return this.carName;
    }

    public void setCarName(String carName) {
        this.carName = carName;
    }

}

Color.java:

package com.example.entities;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "COLOR", catalog = "VEHICLE")
public class Color implements java.io.Serializable {

    private int colorId;
    private String colorName;

    @Id
    @Column(name = "COLOR_ID", unique = true, nullable = false)
    public int getId() {
        return this.colorId;
    }

    public void setId(int colorId) {
        this.colorId = colorId;
    }

    @Column(name = "COLOR_NAME", nullable = false)
    public String getColor() {
        return this.colorName;
    }

    public void setColor(String colorName) {
        this.colorName = colorName;
    }

}

For these two entities, let there be a mapping entity called "CarColor.java". This represents the mapping table between "CAR" and "COLOR". As it is a mapping table, it does not have a primary key. In a Spring/Hibernate scenario, the entity CarColor.java would look like this:

CarColor.java

package com.example.entities;

import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.Table;

@Entity
@Table(name = "CAR_COLOR", catalog = "VEHICLE")
public class CarColor implements java.io.Serializable {

    private Car car;
    private Color color;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "CAR_NAME", nullable = false, insertable = false, updatable = false)
    public Car getCar() {
        return this.car;
    }

    public void setCar(Car car) {
        this.car = car;
    }

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "COLOR_NAME", nullable = false, insertable = false, updatable = false)
    public Color getColor() {
        return this.color;
    }

    public void setColor(Color color) {
        this.color = color;
    }

}

The above example is correct but, as you can see, there is no primary key. So, in scenarios where you want to identify a record uniquely, you will find yourself in a bit of trouble. There is, however, a way around it. If you auto-generated the entities, this workaround is implemented automatically; if not, you can do it yourself. Here is what needs to be done:

Add another entity called CarColorId.java. This class is NOT a persistent class, but it can be used to uniquely identify each record of the CarColor table. Here is the new implementation of CarColor.java along with CarColorId.java.

CarColor.java

package com.example.entities;

import javax.persistence.AttributeOverride;
import javax.persistence.AttributeOverrides;
import javax.persistence.Column;
import javax.persistence.EmbeddedId;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.Table;

@Entity
@Table(name = "CAR_COLOR", catalog = "VEHICLE")
public class CarColor implements java.io.Serializable {

    // The following field is of the type CarColorId
    private CarColorId id;
    private Car car;
    private Color color;

    @EmbeddedId
    @AttributeOverrides({
        @AttributeOverride(name = "carId", column = @Column(name = "CAR_ID", nullable = false)),
        @AttributeOverride(name = "colorId", column = @Column(name = "COLOR_ID", nullable = false))})
    public CarColorId getId() {
        return this.id;
    }

    public void setId(CarColorId id) {
        this.id = id;
    }

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "CAR_NAME", nullable = false, insertable = false, updatable = false)
    public Car getCar() {
        return this.car;
    }

    public void setCar(Car car) {
        this.car = car;
    }

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "COLOR_NAME", nullable = false, insertable = false, updatable = false)
    public Color getColor() {
        return this.color;
    }

    public void setColor(Color color) {
        this.color = color;
    }

}

CarColorId.java

package com.example.entities;

import javax.persistence.Column;
import javax.persistence.Embeddable;

@Embeddable
public class CarColorId implements java.io.Serializable {

    private int carId;
    private int colorId;

    @Column(name = "CAR_ID", nullable = false)
    public int getCarId() {
        return this.carId;
    }

    public void setCarId(int carId) {
        this.carId = carId;
    }

    @Column(name = "COLOR_ID", nullable = false)
    public int getColorId() {
        return this.colorId;
    }

    public void setColorId(int colorId) {
        this.colorId = colorId;
    }

    public int hashCode() {
        int result = 17;

        result = 37 * result + this.getCarId();
        result = 37 * result + this.getColorId();
        return result;
    }

}

Let's analyse the new things included in the above two classes. Firstly, you will have noticed two new annotations: @Embeddable and @EmbeddedId. @Embeddable marks a class that isn't a persistent entity itself but holds persistent fields, and those fields together provide an identity for the persistent class they form an ID for. @EmbeddedId marks the property of that embeddable type inside the entity.

Also, there is a method in CarColorId called hashCode(). hashCode() returns an integer value for a given object, and the hash value of an object always remains the same; a carefully written hashCode() therefore gives a unique identity to an object of the persistent class. So, in our example, the class CarColorId gives a unique identity to our persistent class CarColor.

To perform operations on the CarColor table, you can find and compare objects of type CarColor by using hashCode() and treating that value as the one that provides uniqueness to objects of the class.

As an example, if you want to update an object of type CarColor, first fetch the objects that match the criteria of car name and/or car color, then apply hashCode() to check whether each one matches the object that has to be updated. If it does, perform the update operation.

So, this is it from me. Hope that you have found what you have been looking for. Feel free to comment regarding any questions or suggestions.

 

What is Access Control Allow Origin and how to use CORS in Java Spring

When a JavaScript client tries to consume data from another application or some resource on a server through a REST API, the server responds with an Access-Control-Allow-Origin response header to tell the client which origins may access the content. An origin is any client that sends a request to the server to fetch some resource, and the server can specify exactly which origins are allowed. By default, cross-origin clients are not allowed to fetch resources from the server.

Access-Control-Allow-Origin is part of Cross-Origin Resource Sharing (CORS), and a CORS filter must be implemented on the server to send this response header when building RESTful web services. The way it works is: when a client makes a request for a resource, it sends the Origin header in the request. The server validates this origin and decides whether to allow the request. If it decides to allow it, it responds with the Access-Control-Allow-Origin header; the browser then compares the origin against this header and, if it matches, lets the request complete, otherwise it throws an error.

Here is an example of a GET request made to a REST service and the corresponding response given by the server. Here, the Origin matches the one mentioned by the server.

Request:

GET /test/test.json HTTP/1.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) Gecko/
Firefox/3.5.5 (.NET CLR 3.5.30729)
Accept: application/json
Origin: http://examplesite.com

Response: 

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Date: Sun, 30 Aug 2015
Server: Apache-Coyote/1.1
Access-Control-Allow-Origin: http://examplesite.com

Thus, the request is allowed in the above case. If your server should allow requests from all origins, then you can set:
Access-Control-Allow-Origin: *

Here the "*" wildcard allows any origin to complete its request.
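
A quick way to see which CORS headers a server actually sends is to issue a request with an Origin header yourself. The small sketch below uses the Python requests package and the placeholder URL from the example above; it is illustrative only:

import requests

response = requests.get(
    "http://examplesite.com/test/test.json",
    headers={"Origin": "http://examplesite.com"},
)
print(response.status_code)
print(response.headers.get("Access-Control-Allow-Origin"))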

If you are building a REST service in Spring, you can create a simple or complex CORS filter. This filter will then help your server respond to requests accordingly. The following is a simple program from the official Spring documentation which allows all origins to access a resource on your server.

package hello;
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;

@Component
public class SimpleCORSFilter implements Filter {

	public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {
		HttpServletResponse response = (HttpServletResponse) res;
		response.setHeader("Access-Control-Allow-Origin", "*");
		response.setHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS, DELETE");
		response.setHeader("Access-Control-Max-Age", "3600");
		response.setHeader("Access-Control-Allow-Headers", "x-requested-with");
		chain.doFilter(req, res);
	}

	public void init(FilterConfig filterConfig) {}

	public void destroy() {}
}

This program allows any origin to send GET, POST, OPTIONS and DELETE requests and serves them accordingly. The Access-Control-Max-Age header lets the browser cache this CORS response for 3600 seconds (1 hour).

Without the CORS filter, any client, be it a web front end built using AngularJS or a simple JavaScript client, will not be able to fetch the data, and you might see an error thrown by the browser.

Source: Spring Documentation 1, Spring Documentation 2, StackOverFlow

Using Putty for remote GUI applications

1 Introduction

I had used putty for my Raspberry Pi project. All my initial experiences have been combined into a set of points that detail the errors that occurred as well as how we were able to solve them.
Raspberry Pi: Make a Bench automation computer is another awesome project to work on if you are interested in the Raspberry Pi.

Remote connections to a system over a network can be established easily through SSH (Secure Shell): we can log in, perform actions or send commands to another system remotely through this connection on the command line. What we cannot do, however, is launch a GUI application to view content present on the remote node. This is the disadvantage of using ssh in a terminal.

But this disadvantage can easily be overcome by using "putty", a remote login application which can not only be used to log in to a remote node but also to launch GUI applications. Examples of GUI applications are browsers, text viewers, etc.

This tutorial concentrates on installing and using Putty on a Raspberry Pi cluster running the Raspbian OS and MPICH2 (message passing interface). We use putty to view some text files using the "leafpad" application and browsers like "Netsurf" and "Dillo" that come preinstalled with any version of Raspbian. We will also look into saving the settings, so that from the second time onwards it is just one click to load them.

2 Installing and launching Putty

Putty can be installed through terminal. Run the following command:

sudo apt-get install putty

Once installed, test if it has been successfully installed by running it. To run it there are two ways:

1. type the command in the terminal:

putty

2. or you can also launch through the menu, as shown in the following image:

Once you open putty, it will look like the image below.

3 Configuring Putty

Once putty is launched, we first enter the IP of the node we want to connect to in the “Host Name” field located in the session window. Here we log into the IP “192.168.3.104”. Refer to the image below to enter the IP:

Once the IP is configured, we will have to enable X11, which enables us to run GUI based applications from the remote node. To do so, follow the steps:

  • On the left panel of putty, scroll down and select the SSH option.
  • After clicking on SSH, you get many options, click on the “X11” option , which is present in the left panel.
  • Once X11 is selected, check the option that says “Enable X11 forwarding” on the right side.

Once the above steps are done, the putty window must look like the image below:

4 Connecting to the remote node

Once X11 forwarding is enabled, click on the Open button at the bottom of the putty window. This opens a connection to the remote node with the IP "192.168.3.104" and you get a terminal. I have also run the "ls" command just to show the files present on the remote node I have logged into. It looks similar to the image below.

5 Opening HTML files in browser

How we can open HTML files on the Raspberry Pi remotely depends on the browser being used. Here I will show you how to use Dillo and Netsurf to open an existing HTML file called "sum1.html".

5.1 Using the Netsurf Browser

To open the file “sum1.html”, we type the following command in the terminal:

netsurf file:///home/pi/sum1.html

The following image shows the command and the Netsurf browser that has opened.

For other GUI applications and for the settings saving option, read my full tutorial on HowtoForge here.

 

Installing Network Simulator 2 (NS2) on Ubuntu 14.04

1 Introduction

Network simulators are tools used to simulate discrete events in a network, which helps to predict the behaviour of a computer network. Generally the simulated networks have entities like links, switches, hubs, applications, etc. Once the simulation model is complete, it is executed to analyse the performance. Administrators can then customize the simulator to suit their needs. Network simulators typically come with support for the most popular protocols and networks in use today, such as WLAN, UDP, TCP, IP, WAN, etc.

Most simulators that are available today are based on a GUI application like the NCTUNS while some others incl. NS2 are CLI based. Simulating the network involves configuring the state elements like links, switches, hubs, terminals, etc. and also the events like packet drop rate, delivery status and so on. The most important output of the simulations are the trace files. Trace files log every packet, every event that occurred in the simulation and are used for analysis. Network simulators can also provide other tools to facilitate visual analysis of trends and potential trouble spots. Most of the simulation is performed in discrete time intervals where events that are in the queue are processed one after the other in an order.

Since simulation is a complex task, we cannot guarantee that all the simulators can provide exact or accurate results for all the different type of information. Examples of network simulators are: ns, NCTUNS, NetSim, etc.

ns2 is one name in a series of discrete event network simulators: ns-1, ns-2 and ns-3. All of them are discrete-event network simulators, primarily used in research and teaching. ns2 is free software, publicly available under the GNU GPLv2 license for research, development, and use.

This post deals with the installation of “ns2” also called the “network simulator 2” in Ubuntu 14.04.

2 Download and Extract ns2

Download the all in one package for ns2 from here

The downloaded package will be named "ns-allinone-2.35.tar.gz". Copy it to the home folder. Then, in a terminal, use the following two commands to extract the contents of the package:

cd ~/
tar -xvzf ns-allinone-2.35.tar.gz

All the files will be extracted into a folder called “ns-allinone-2.35”.

3 Building the dependencies

ns2 requires a few packages to be preinstalled. It also requires GCC version 4.3 to work correctly. So install all of them by using the following command:

sudo apt-get install build-essential autoconf automake libxmu-dev

One of the dependencies mentioned is the compiler GCC-4.3, which is no longer available, so we have to install GCC-4.4 instead. Version 4.4 is the oldest we can get. To do that, use the following command:

sudo apt-get install gcc-4.4

The image below shows the output of executing both the above commands. If you have all the dependencies pre-installed, as I did, the output will look like the image below:

Once the installation is over, we have to make a change in the "ls.h" file. Use the following steps to make the change:

To navigate to the "linkstate" folder, use the following command. Here it is assumed that the extracted ns folder is in the home folder of your system.

cd ~/ns-allinone-2.35/ns-2.35/linkstate

Now open the file named "ls.h" with the command below and scroll to line 137. In that line, change the word "error" to "this->error". The image below shows line 137 (highlighted) after making the change to the ls.h file.

gedit ls.h

Save that file and close it.

Now there is one more step that has to be done. We have to tell ns which version of GCC will be used. To do so, go to your ns folder and type the following command:

sudo gedit ns-allinone-2.35/otcl-1.13/Makefile.in

In the file, change CC= @CC@ to CC=gcc-4.4, as shown in the image below.

 

For installation, usage and  more, read my full story here.