Test automation with Cucumber JVM, Selenium and Mocha

Cucumber:
- A test framework which also supports BDD (Behavior Driven Development)
- BAs / QAs can write test cases in plain text following a few simple conventions
- It generates .NET, Ruby and Java code stubs automatically (see the sketch after this list)
- Its Java implementation is called Cucumber JVM
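
As a minimal sketch (the feature, step wording and class below are hypothetical, and Cucumber package names vary between versions), a plain-text scenario and the kind of Java step-definition stubs Cucumber JVM produces for it could look like this:

Feature: Country search
  Scenario: Searching for an existing country
    Given I am on the country search page
    When I search for "India"
    Then I should see "India" in the results

// Hypothetical step-definition stubs for the scenario above (Cucumber JVM)
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class CountrySearchSteps {

    @Given("I am on the country search page")
    public void iAmOnTheCountrySearchPage() {
        // open the page, e.g. with the Selenium code sketched further below
    }

    @When("I search for {string}")
    public void iSearchFor(String country) {
        // type the country name and submit the search form
    }

    @Then("I should see {string} in the results")
    public void iShouldSeeInTheResults(String expected) {
        // assert that the results list contains the expected country
    }
}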

Selenium IDE:
- It is a Firefox extension that lets you record, edit and debug Selenium tests in the browser.
- It can export recorded tests as Selenium Remote Control Java client code
- Selenium has client and server libraries
- In the Selenium Firefox extension you can: start recording > act in the browser > it generates the test script > export the test case > copy the code and paste it into the code stubs generated by Cucumber (a sketch of such code follows this list)
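
The pasted Java eventually boils down to browser-driving calls like the ones below. This is a minimal hand-written Selenium WebDriver sketch rather than the literal IDE export, and the URL and element locators are hypothetical:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CountrySearchBrowserSteps {

    public void searchForCountry(String country) {
        // launch Firefox (assumes a Firefox driver is available on the machine)
        WebDriver driver = new FirefoxDriver();
        try {
            // hypothetical application URL and element locators
            driver.get("http://localhost:8080/country-search");
            driver.findElement(By.name("q")).sendKeys(country);
            driver.findElement(By.id("searchButton")).click();
            // a real step definition would assert on the results table here
        } finally {
            driver.quit();
        }
    }
}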

Mocha:
- It is a JavaScript unit test framework
- It runs on Node.js
- Here it is used for UI testing: filtering and displaying messages based on events and conditions

Watch this video to learn more about these tools and see a real demo with a use case:

An ideal sprint planning for QA could be:
- BA (Business Analyst) and TA (Technical Architect) gather requirements from the client
- TA and Dev design the solution – mock-ups / flow diagrams
- BA and Dev write scenarios and pass them to QA
- QA writes test cases and confirms them with BA and Dev

Posted in Best Practices and Guidelines, Knowledge Sharing

Big Data : Hadoop : Some important facts and terms

Big data is characterised by the 3Vs, i.e. Volume, Velocity and Variety.

Big data implementations are used to store read-only / append-only data with high volume, velocity and variety. They are not a replacement for relational databases.

The best use cases for big data / Hadoop implementations are:
1. As a store for data generated by IoT (Internet of Things) devices
2. As an archive store for parts of relational database data such as audit trail data, field history and user analytics data. Such data is usually generated by applications and written to an RDBMS, but is rarely edited or updated afterwards.

HIVE: a utility used to store table-style data in Hadoop. Table metadata is stored and managed internally in a relational metastore (MySQL here), while the actual data is kept on data nodes in the Hadoop file system. HIVE scripts are almost identical to SQL. Example HIVE scripts:
Select * from TableName;
Select Count(Value) from TableName;

Creating a table in the Hadoop file system using a HIVE script:
create table CountryTable(id int,name string);
HIVE script to load data into a HIVE-created table stored in the Hadoop file system:
load data local inpath '/home/hduser/country.txt' overwrite into table CountryTable;

SQOOP: the name comes from "SQL-to-Hadoop". It is a tool used to import data from relational databases such as SQL Server, MySQL and Oracle into the Hadoop file system.

Posted in BIG DATA, HADOOP, Hive

BIG DATA : HADOOP : I was able to set up Hadoop – here are the steps

A few months ago I was following http://www.tutorialspoint.com/hadoop/hadoop_enviornment_setup.htm and it took me around two weeks of nights after office before it finally worked, and I had lost track of what I had done and in what sequence. So this weekend I tried it once again. Following are the steps I followed this time:

1. After installing Ubuntu 14.04, get updates from the terminal
sudo apt-get update

2. Install java
sudo apt-get install openjdk-7-jre
The following commands return the paths where Java and Python are installed on Ubuntu.
which java
which python

Check which version of Java is installed:
java -version

3. Add a dedicated user for Hadoop
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser

This will create a new user hduser along with the home directory “/home/hduser”.
Use the command below to add this user to the sudo group:
sudo usermod -a -G sudo hduser

4. Installing SSH
SSH (“Secure Shell”) is a protocol for securely accessing one machine from another. Hadoop uses SSH to access slave nodes and to start and manage all HDFS and MapReduce daemons.
sudo apt-get install openssh-server

5. Generate a passwordless SSH connection
a. Run the command below as hduser; rsa is the algorithm used to generate the key pair
ssh-keygen -t rsa
b. After the key pair is generated, append the public key to localhost's authorized keys
cat ~/.ssh/id_rsa.pub | ssh hduser@localhost 'cat>>.ssh/authorized_keys'
c. Check that you can connect to localhost without a password
ssh localhost
This should not ask for a password.

6. Go back to the root user with the ‘exit’ command
a. Install gksu in Ubuntu. It lets you run graphical applications (such as gedit) as root.
sudo apt-get install gksu
b. Install vim in Ubuntu. It is a terminal text editor.
sudo apt-get install vim

7. Disabling IPv6
Hadoop does not work on IPv6, so we should disable it. Another reason is that Hadoop has been developed and tested on IPv4 stacks, and the nodes will be able to communicate as long as we have an IPv4 cluster. (Once you have disabled IPv6 on your machine, you need to reboot it for the change to take effect. If you don't know how to reboot from the command line, use sudo reboot.)
To disable IPv6 on your Linux machine, update /etc/sysctl.conf by adding the following lines at the end of the file.
a. Open a terminal, switch to the root user, run gksudo gedit /etc/sysctl.conf to open the configuration file, and add the following lines at the end:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

b. After that, run cat /proc/sys/net/ipv6/conf/all/disable_ipv6
If it reports '1', IPv6 is disabled. If it reports '0', follow Step c and Step d.
c. Run sudo sysctl -p and you will see this in the terminal:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

d. Repeat Step b above and it will now report 1.

8. Copy the hadoop package folder from the root user's home to hduser's home (this assumes the Hadoop release has already been downloaded and extracted as a folder named hadoop in the root user's home):
scp -r hadoop hduser@localhost:/home/hduser

9. Change the ownership and permissions of the hadoop folder:
sudo chown hduser:hadoop -R /home/hduser/hadoop
sudo chmod -R 777 /home/hduser/hadoop

10. Edit the .bashrc file to add the Hadoop environment variables:
vi ~/.bashrc
Open the file in the editor and press Shift+G to jump to the end of the file.
Then paste the lines below:
# -- HADOOP ENVIRONMENT VARIABLES START -- #
export JAVA_HOME=/usr/
export PATH=$PATH:$JAVA_HOME
export HADOOP_HOME=/home/hduser/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin/:$HADOOP_CONF_DIR
export HIVE_HOME=/home/hadoop/hive
export PATH=$PATH:$HIVE_HOME/bin
# -- HADOOP ENVIRONMENT VARIABLES END -- #

11. Check all config files in hadoop/etc/hadoop such as: core-site.xml, hadoop-env.sh, hdfs-site.xml etc.

12. Create a new directory hdfs inside the hadoop folder, with two folders named name and data inside it:
cd /home/hduser/hadoop
mkdir hdfs
mkdir hdfs/name
mkdir hdfs/data

13. Reload the .bashrc file to pick up the updated PATH
source ~/.bashrc
echo $PATH

This will list the updated PATH as per the current user's settings

14. Format the Hadoop namenode (HDFS)
hadoop namenode -format

15. Start all services
start-all.sh

16. Run jps to see the list of running services. Six services should be running:
10894 NameNode
11045 DataNode
11228 SecondaryNameNode
12055 Jps
11503 NodeManager
11377 ResourceManager

17. To see the namenode, resourcemanager and nodemanager web UIs, use:
http://localhost:50070 - namenode
http://localhost:8088/cluster - resourcemanager
http://localhost:8042/node - nodemanager

18. Go to root user:
sudo -i

19. Rename folder from datanode to data
gvfs-move /home/hduser/hadoop/hdfs/datanode /home/hduser/hadoop/hdfs/data

20. For the newly set up hadoop folder, update core-site.xml and hdfs-site.xml as below:

hdfs-site.xml
-------------------------
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hduser/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hduser/hadoop/hdfs/data</value>
  </property>
</configuration>

core-site.xml
-----------------------------------
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost</value>
  </property>
</configuration>

21. Using the HIVE commands below, let's load some data into HDFS:
create table CountryTable(id int,name string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';

load data local inpath '/home/hduser/country.txt' overwrite into table CountryTable;

22. Some more HIVE scripts, very similar to SQL Server scripts:
Select * from CountryTable;
Select Count(*) from CountryTable;
Select Count(ID) from CountryTable;
Select SUM(ID) from CountryTable;

Posted in BIG DATA, HADOOP, Hive

force.com : a true modern RAD

I am saying this because of the rich development features of the platform. Just as an example, I was able to create this simple CMS on force.com, including making it available over the internet, in just half an hour.

Posted in Force.com, Knowledge Sharing

TFS: Delete workspace of another user from command line

One of my team members left the company while some of the project files were still checked out by him. Other team members were not able to check out those files to work on them. The TFS Administration Console does not provide any way around this either. I found the "tf" command-line utility to be useful here. The following command can be used to delete the user's workspace, but any changes made to the files after that checkout are lost. The TFS version of the files is unaffected by deleting a TFS workspace.

tf workspace /delete WorkSpaceName;UserName /server:http://TFSServerURL:8080/tfs/TFSInstanceName

Posted in Troubleshooting

Realistic Positive Thinking

Posted in Life, My Inspirations

A simple CMS built on force.com

A Force.com developer org allows you to create one public website, and I created this one. I built this simple CMS in around one hour to publish blog posts on my force.com public website.

Posted in APEX, Force.com, Salesforce

Dirty Coding and Learning 1

I used the standardController ‘Account’ and approximately 15 Visualforce (VF) components on this page, such as:

page, pageBlock, pageBlockSection, outputLabel, outputText, pageBlockTable + column, repeat + outputLabel, dataTable + column, outputLink, form, pageBlockSectionItem, outputPanel, inputCheckBox, inputTextArea, inputSecret

I have also used system variables on this page.

Check the example: http://pradeep-kumar-developer-edition.eu5.force.com/apex/Learn_501_1?id=00124000002uk2d

Code can be found on my force.com website: http://pradeep-kumar-developer-edition.eu5.force.com/

Posted in APEX, Force.com, Knowledge Sharing, Salesforce

Display Parent and Child navigational records in a nested DataTable

In this sample I have tried out the parent and child navigation capability of SOQL. Taking Contact as the main entity, I have navigated to Account (parent) and Cases (child). The SOQL query result is bound to a nested DataTable.

Check the example: http://pradeep-kumar-developer-edition.eu5.force.com/apex/ViewParentChildSOQL

Code can be found here: http://pradeep-kumar-developer-edition.eu5.force.com/

Posted in APEX, Force.com, Knowledge Sharing

How to build your creative confidence

Often people assume that being creative and innovative is not their domain. Watch this short video by David Kelley and you will know that each one of us is creative and innovative.

Posted in Life, My Inspirations