Introduction To Big Data
on December 17, 2019

Big Data refers to volumes of data so large that they cannot be stored or processed using traditional approaches within a given time frame.

Whether an amount of data counts as Big Data depends on the context in which it is used. For example, if we try to attach a 100 MB file through Gmail, that file is big data in that context.

Big Data Analytics examines large and varied datasets to uncover hidden patterns, insights, and correlations. It mainly involves applying data mining algorithms to a dataset, and it helps large companies drive their growth and development.

Big Data is commonly characterized by five ‘V’s: Value, Velocity, Volume, Veracity, and Variety.


Value refers to the value that can be derived from accessing and analyzing big data.

Velocity refers to the speed at which data is generated and at which changes arrive.

Volume indicates the amount of data being generated every second.

Veracity deals with the discrepancies and uncertainty found in the data.

Variety is the combination of various data types being fed into the system.

Big Data is used in various sectors such as banking, education, health care, and social media.

Some of the tools used in Big Data analytics are:

Apache Hadoop

It is a framework that allows us to store big data in a distributed environment for parallel processing.

It has two core functions: data storage and data processing. Both happen in a distributed fashion to improve efficiency and throughput.

A programming model known as MapReduce coordinates the processing: the data is split into segments that are processed in parallel across the cluster (map), and the intermediate results are then grouped and combined into summarized output (reduce).
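The map–shuffle–reduce flow described above can be sketched in plain Python. This is a single-machine illustration of the idea, not the Hadoop API; the classic word-count example is used, with each input string standing in for one data split.

```python
from collections import defaultdict
from itertools import chain

def map_phase(split):
    # Map: emit a (word, 1) pair for every word in one input split.
    return [(word.lower(), 1) for word in split.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key, so each key's values
    # end up at a single reducer.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: summarize each group into a single result per key.
    return {word: sum(counts) for word, counts in groups.items()}

splits = ["big data needs big tools", "data flows in"]
mapped = chain.from_iterable(map_phase(s) for s in splits)
counts = reduce_phase(shuffle(mapped))  # e.g. counts["big"] == 2
```

In a real Hadoop cluster the map tasks run on the nodes that hold each data block, and the shuffle moves grouped data over the network to the reducers.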

Apache Pig

Apache Pig is a platform for analyzing large datasets by representing them as data flows. Pig is designed to provide an abstraction over MapReduce, which reduces the complexity of writing MapReduce programs directly.

Apache HBase

Apache HBase is a multidimensional, distributed, open-source NoSQL database written in Java.
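"Multidimensional" here means that a cell is addressed by row key, column, and timestamp, with multiple versions kept per cell. A toy Python sketch of that data model (this is an illustration of the concept, not the HBase API) might look like:

```python
import time

# Toy sketch of HBase's multidimensional map:
#   value = table[row_key][column_family:qualifier][timestamp]
table = {}

def put(row, column, value):
    # Each cell keeps a list of timestamped versions, newest last.
    table.setdefault(row, {}).setdefault(column, []).append((time.time(), value))

def get(row, column):
    # Reads return the most recent version by default.
    return table[row][column][-1][1]

put("user#42", "info:name", "Ada")
put("user#42", "info:name", "Ada Lovelace")  # a newer version of the same cell
name = get("user#42", "info:name")           # most recent value wins
```

In HBase itself, columns are grouped into column families and rows are kept sorted by key and split across servers, which is what makes it distributed.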

Apache Spark

Apache Spark is an open-source, general-purpose cluster-computing framework. It provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
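The data-parallel style Spark popularizes can be sketched with the standard library alone. This is plain Python, not Spark's RDD/DataFrame API: each partition of the data is transformed independently, and the partial results are then combined.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

# Split the dataset into partitions, as a cluster would split it across nodes.
data = list(range(1, 101))
partitions = [data[i:i + 25] for i in range(0, len(data), 25)]

def process_partition(part):
    # Work applied to one partition in isolation:
    # keep even numbers, square them, and sum locally.
    return sum(x * x for x in part if x % 2 == 0)

# Run every partition's work concurrently, then combine partial results.
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(process_partition, partitions))

total = reduce(lambda a, b: a + b, partials)
```

In Spark, the same shape of computation is expressed with transformations like `map` and `filter` on a distributed dataset, and the framework handles scheduling the per-partition work and recovering from node failures.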


Talend

Talend is an open-source data integration platform. It provides services for enterprise application integration, data integration, data management, cloud storage, data quality, and Big Data.


Another tool in this space is software for monitoring, searching, and analyzing machine-generated data through a Web-style interface.

Apache Hive

It is a data warehouse system built on top of Hadoop and is used for querying and analyzing structured and semi-structured data.


Apache Kafka

Apache Kafka is a distributed messaging system that was initially developed at LinkedIn. Kafka is agile, fast, scalable, and distributed by design.

Big Data Analytics is indeed a revolution in the field of Information Technology. The use of data analytics by companies is increasing every year, and their primary focus is on their customers. Hence, the field is flourishing in Business-to-Consumer (B2C) applications.

Written By
Sony Clare

