Introduction to Big Data

Big data refers to data sets that are so voluminous and complex that traditional data-processing software is inadequate to deal with them. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy, and data sourcing. There are a number of concepts associated with big data. Originally there were three: volume, variety, and velocity. Other concepts later attributed to big data are veracity (i.e., how much noise is in the data) and value.

Lately, the term “big data” tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from data, and seldom to a particular size of data set. There is little doubt that the quantities of data now available are indeed large, but that’s not the most relevant characteristic of this new data ecosystem.

Analysis of data sets can find new correlations to spot business trends, prevent diseases, combat crime, and so on. Scientists, business executives, medical practitioners, advertisers, and governments alike regularly meet difficulties with large data sets in areas including Internet search, fintech, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics, connectomics, complex physics simulations, biology, and environmental research.

Data sets grow rapidly, in part because they are increasingly gathered by cheap and numerous information-sensing Internet of Things devices such as mobile devices, aerial sensors (remote sensing), software logs, cameras, microphones, radio-frequency identification (RFID) readers, and wireless sensor networks.

The world’s technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s.

Relational database management systems and desktop statistics and visualization packages often have difficulty handling big data. The work may instead require massively parallel software running on tens, hundreds, or even thousands of servers (a minimal sketch of this idea follows the list of characteristics below). What counts as “big data” varies depending on the capabilities of the users and their tools, and expanding capabilities make big data a moving target. For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration. Big data can be described by the following characteristics:

Volume:
The quantity of generated and stored data. The size of the data determines the value and potential insight, and whether it can be considered big data or not.

Variety:
The type and nature of the data. This helps people who analyze it to use the resulting insight effectively. Big data draws from text, images, audio, and video, and it completes missing pieces through data fusion.

Velocity:
The speed at which the data is generated and processed to meet the demands and challenges that lie in the path of growth and development. Big data is often available in real time.

Veracity:
The quality of captured data can vary greatly, affecting the accuracy of analysis.
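The following is a minimal sketch of the parallel-processing idea mentioned above, scaled down to a single machine: the input is split into chunks, each chunk is counted in a separate worker process, and the partial results are merged. Frameworks such as Apache Hadoop and Apache Spark apply the same map/reduce pattern across many servers; the sample data and worker count here are purely illustrative.

```python
# A minimal map-reduce style word count over an in-memory list of log lines.
# A local process pool stands in for a cluster of servers.
from collections import Counter
from multiprocessing import Pool


def map_count(chunk):
    """Map step: count words in one chunk of lines."""
    counts = Counter()
    for line in chunk:
        counts.update(line.lower().split())
    return counts


def word_count(lines, workers=4):
    """Split the input, count chunks in parallel, then reduce the partial counts."""
    chunk_size = max(1, len(lines) // workers)
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
    with Pool(workers) as pool:
        partial_counts = pool.map(map_count, chunks)
    total = Counter()
    for partial in partial_counts:  # Reduce step: merge the partial results.
        total.update(partial)
    return total


if __name__ == "__main__":
    sample = ["big data is big", "data velocity and data volume"]
    print(word_count(sample).most_common(3))
```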
Big data virtualization is a way of gathering data from multiple sources in a single layer. The gathered data layer is virtual: unlike other methods, most of the data remains in place and is taken on demand directly from the source systems. A minimal sketch of this idea follows.
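The sketch below shows a single virtual layer that knows how to reach each source system and pulls records only when asked, rather than copying everything into a central store. The source names and query functions are hypothetical placeholders, not a real virtualization product's API.

```python
# A minimal sketch of data virtualization: one virtual layer over several
# source systems, fetching records on demand while the data stays in place.
class VirtualDataLayer:
    def __init__(self):
        self._sources = {}  # name -> callable that queries the source system

    def register_source(self, name, query_fn):
        """Register a source system by name; its data is never bulk-copied."""
        self._sources[name] = query_fn

    def fetch(self, name, **criteria):
        """Pull matching records directly from the named source, on demand."""
        return self._sources[name](**criteria)


# Hypothetical source systems: in practice these would wrap a database query,
# a REST call, or a file read against the system that owns the data.
def query_crm(customer_id=None):
    return [{"customer_id": customer_id, "name": "Example Customer"}]


def query_web_logs(customer_id=None):
    return [{"customer_id": customer_id, "page": "/pricing", "visits": 3}]


layer = VirtualDataLayer()
layer.register_source("crm", query_crm)
layer.register_source("web_logs", query_web_logs)

# Each call goes straight to the owning system at query time.
print(layer.fetch("crm", customer_id=42))
print(layer.fetch("web_logs", customer_id=42))
```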
The above is a brief overview of Big Data. Watch this space for more updates on the latest trends in technology.
