Literature Survey: Apache Cassandra II


This paper provides a detailed description of Apache Cassandra, an open-source distributed database management system, including the services that the system provides, its architecture, and the problems with the system. We also give an overview of how Cassandra compares with HBase and Riak, and how Cassandra supports data storage and retrieval at Facebook.

Introduction

Cassandra is a distributed storage system suited for applications that need particularly large and scalable storage. It is an open-source system that has been described as a Bigtable data model running on an Amazon Dynamo-like infrastructure [8].

It is capable of managing structured data that can scale to a very large extent across many commodity servers, with no single point of failure. Cassandra runs on top of an infrastructure of hundreds of nodes that may be spread across many datacenters. It aims to provide reliable and scalable storage. When large-scale data grows continuously in size, small and large components fail continually.


Cassandra manages to provide reliability, high performance, high availability and applicability in such situations. Cassandra shares many design strategies with traditional databases, but instead of supporting a full relational data model, it provides a simple data model that allows clients to dynamically control data layout and format.

Architecture

Each table in Cassandra is a map consisting of rows of keys and their respective values. Every row has a unique key, which is typically 16-36 bytes long. The value is a column family, which may consist of one or more columns.


These columns can be defined by the user. The number of column families can vary from user to user, but they must be fixed when the cluster is started. Each column consists of a name, a value and a user-defined timestamp. The number of columns that can be contained in a column family is very large. Each column family can contain one of two structures: supercolumns or columns. Supercolumns have a name and an effectively unbounded number of columns associated with them. Just like columns, the number of supercolumns associated with any column family can be unbounded and can vary per key. The data model is represented in figures 1 and 2; a minimal code sketch of this layout follows the figures.

Fig 1: Cassandra Column Family [10]

Fig 2: Cassandra Super Column Family [10]
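To make the nested structure in the figures concrete, the following sketch models a standard column family and a super column family as nested Python dictionaries. The row keys, column names and message ids are hypothetical and chosen only to illustrate the key -> (super column ->) column -> (name, value, timestamp) layout; this is not Cassandra's actual storage code.

```python
import time

# A column is the smallest unit: a (name, value, timestamp) triple.
def column(name, value):
    return {"name": name, "value": value, "timestamp": time.time()}

# Standard column family: row key -> {column name -> column}.
standard_cf = {
    "user:1234": {                                    # hypothetical row key
        "name":  column("name", "alice"),
        "email": column("email", "alice@example.com"),
    }
}

# Super column family: row key -> {super column name -> {column name -> column}}.
# Here each super column groups message ids, e.g. by search term.
super_cf = {
    "user:1234": {
        "cassandra": {                                # super column name
            "msg:17": column("msg:17", ""),
            "msg:42": column("msg:42", ""),
        }
    }
}

print(standard_cf["user:1234"]["email"]["value"])     # alice@example.com
print(sorted(super_cf["user:1234"]["cassandra"]))     # ['msg:17', 'msg:42']
```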

Consistent Hashing and Data Replication

Cassandra partitions data across the cluster using consistent hashing, but uses an order-preserving hash function to do so. The consistent hashing technique provides hash table functionality wherein the addition or removal of one slot does not significantly change the mapping of keys to slots. In contrast, in most traditional hash tables a change in the number of array slots causes nearly all keys to be remapped. With consistent hashing, only K/n keys need to be remapped on average, where K is the number of keys and n is the number of slots [11].

In consistent hashing, the nodes in the system are arranged to form a circular space, or ring. Each node is assigned a random value in this space, which is the output of a hash function and is usually 128 bits in size. This value represents the position of the node in the ring. Each data item is identified by a key and is assigned to a node in the ring: the data item's key is hashed, the nodes in the ring are traversed in a clockwise direction, and the item is assigned to the first node whose position is larger than the key of the data item. This node is deemed the coordinator for this key. The key is specified by the application and is used by Cassandra to route requests. Thus, each node becomes responsible for the region of the ring between it and its predecessor node on the ring. The main advantage of consistent hashing is that when a node leaves the ring only its predecessor and successor are affected, not the other nodes in the ring. Also, assuming that hashing is uniformly random, the load between the nodes is well balanced. It is also easy to track the data items in the ring and access them.

In order to guarantee availability, each data item is replicated at N hosts, where N is the replication factor configured per instance. Each key is assigned to a coordinator node, and the coordinator is in charge of the replication of the data items that fall within its range. In addition to locally storing each key within its range, the coordinator replicates these keys at the N-1 other nodes in the ring. Replication takes place as follows: after the value is written to the coordinator node, a copy of the value is written to another node in the same cluster, and copies are then written to at least two other clusters. Conflicts, if any, are resolved by examining the timestamps. A minimal sketch of this partitioning and replication scheme is given below.
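The sketch below implements a toy consistent-hashing ring in Python: a key is hashed, the first node clockwise becomes its coordinator, and replicas are placed on the next N-1 distinct nodes. This is only an illustration of the idea under simple assumptions: it uses a 128-bit MD5 hash rather than Cassandra's order-preserving partitioner, the simplest replica-placement strategy, and invented node names.

```python
import bisect
import hashlib

class Ring:
    """Toy consistent-hashing ring with coordinator plus N-1 successor replicas."""

    def __init__(self, nodes, replication_factor=3):
        self.n = replication_factor
        # Each node's position on the ring is the hash of its name.
        self.ring = sorted((self._hash(node), node) for node in nodes)

    @staticmethod
    def _hash(value):
        # 128-bit position in the key space (MD5 chosen only to keep the sketch short).
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def preference_list(self, key):
        """Coordinator (first node clockwise from the key) plus the next N-1 nodes."""
        positions = [pos for pos, _ in self.ring]
        start = bisect.bisect_right(positions, self._hash(key)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1] for i in range(self.n)]

ring = Ring(["node-a", "node-b", "node-c", "node-d", "node-e"])
print(ring.preference_list("user:1234"))   # e.g. ['node-c', 'node-d', 'node-e']
```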

Gossip-style membership algorithm

Cluster membership in Cassandra is based on Scuttlebutt [19], an efficient anti-entropy gossip-based mechanism. Scuttlebutt offers very efficient CPU usage and very efficient use of the gossip channel. Within the Cassandra system, gossip is not only used for membership but also to disseminate other system-related control state. When a node needs to enter the system, it is first assigned a hash value within the key space. A search is then conducted in a clockwise manner to find the nodes nearest to the new node. The predecessor and successor nodes are located, and the node inserts itself between the two nodes on the ring. The Cassandra system elects a leader amongst its nodes using a system called Zookeeper [13]. On joining the cluster, all nodes contact the leader, who tells them what ranges they are responsible for. The metadata about the different nodes in the ring and the regions a node is responsible for is cached locally at each node and inside Zookeeper. This helps when a node crashes or leaves the ring suddenly and comes back up after a period of time: the cache can be used by the node to retrieve information about its position on the ring and what ranges it was responsible for. The join step is illustrated in the sketch below.
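As an illustration of the join step described above, the following sketch (reusing the same hypothetical MD5 hashing as the earlier ring example) shows a new node computing its position, locating its predecessor and successor, and inserting itself between them. Gossip dissemination, Scuttlebutt state exchange, and the Zookeeper-based leader election are deliberately omitted.

```python
import bisect
import hashlib

def position(name):
    """128-bit position in the key space (MD5 used purely for illustration)."""
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

def join(ring_positions, new_node):
    """Insert a new node between its predecessor and successor on the ring.

    ring_positions: sorted list of (position, node) tuples for current members.
    Returns the predecessor entry, the new entry, and the successor entry.
    """
    pos = position(new_node)
    i = bisect.bisect_left([p for p, _ in ring_positions], pos)
    predecessor = ring_positions[i - 1]                   # wraps to the last entry
    successor = ring_positions[i % len(ring_positions)]   # wraps to the first entry
    ring_positions.insert(i, (pos, new_node))
    return predecessor, (pos, new_node), successor

members = sorted((position(n), n) for n in ["node-a", "node-b", "node-c"])
pred, new, succ = join(members, "node-d")
print(pred[1], "->", new[1], "->", succ[1])
```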

ISSUES WITH CASSANDRA

The major problem with using the consistent hashing scheme is that there is no real-world auditability. If a datacenter fails, it is impossible to say when the required number of replicas will be up to date. According to [13], this "can be extremely painful in a live situation when one of the data centers goes down and you want to know exactly when to expect data consistency so that recovery operations can go ahead smoothly". Also, Cassandra relies on high-speed fibre between data centers. When a data center goes down and there is a need to fall back to a secondary data center that is 20 miles away, the high latency will lead to write timeouts and highly inconsistent data. In addition, there is cluster downtime associated with even minor schema changes.

COMPARISON WITH HBASE AND RIAK

Deinspanjer [12] provides a good comparison between Cassandra, HBase and Riak. In HBase, the data is split into regions. When a region file is split, HBase determines which machines should be the owners of the newly split files. Eventually, the new node will store a reasonable portion of the new and newly split data. In Riak, the data is divided into partitions that are distributed among the nodes. When a node is added, the distribution of partition ownership is changed and both old and new data will immediately start migrating over to the new node. In Cassandra, nodes claim ranges of data. By default, when a new machine is added, it will receive half of the largest range of data. In terms of cost, Cassandra and Riak are much lighter on memory requirements compared to HBase and therefore much cheaper. To implement a firewall that focuses only on payload inspection, a separate layer on the front end has to be built for Cassandra and HBase; this adds to the requirements and implementation time of the custom front-end layer. In Riak, this is much easier to implement within the Riak business logic itself. In HBase, schema changes involving adding or altering column families require disabling the table; in Cassandra they require a rolling restart of the nodes, whereas in Riak new buckets and schema changes are completely dynamic. In terms of reliability, Cassandra and Riak have no single point of failure and most configuration changes can be handled using rolling restarts, whereas in HBase upgrades require a restart of the entire cluster.

CASE STUDIES

Many organisations are moving from relational databases to Cassandra due to the increasing difficulty of building a high-performance, write-intensive application on a data set that may grow by leaps and bounds. This growth has forced organisations to think about horizontal and vertical partitioning schemes that are better suited to providing this high scalability and reliability.

Digg

Digg [15] is a place for people to discover and share content from anywhere on the web. Digg builds lists of popular stories being shared across the web, each of which has a digg button. Along with that button is the number of people from the Digg community who have said they like, or digg, that story.

In an effort to scale their database infrastructure, Digg chose Cassandra since it provides a highly available peer-to-peer cluster with more than a simple key-to-column mapping. They use Cassandra to deploy their green badges functionality. Badges appear on the Digg icon for a story when a friend has dugg it. They created a set of buckets per (user, item) pair with a list of friends who dugg the story; a sketch of such a bucket layout is given below.
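The following sketch shows one plausible shape for those buckets: a map from a (user, item) pair to the set of that user's friends who dugg the story. The function names and ids are invented for illustration only; the actual Digg schema stored these buckets as Cassandra rows and columns rather than in-memory Python structures.

```python
from collections import defaultdict

# (user_id, story_id) -> set of that user's friends who dugg the story.
badge_buckets = defaultdict(set)

def record_digg(digger, story_id, followers):
    """When `digger` diggs a story, add them to the bucket of each follower."""
    for follower in followers:
        badge_buckets[(follower, story_id)].add(digger)

def green_badge(user_id, story_id):
    """A story shows a green badge for a user if any of their friends dugg it."""
    return sorted(badge_buckets[(user_id, story_id)])

record_digg("alice", "story:42", followers=["bob", "carol"])
print(green_badge("bob", "story:42"))    # -> ['alice']  (badge shown)
print(green_badge("dave", "story:42"))   # -> []         (no badge)
```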

Twitter

Twitter migrated partially to Cassandra to support its growing rate of data. According to [16], Twitter uses Cassandra to store and query their database of places of interest. The research team also uses it to store the results of data mining done over their entire user base. In addition, Twitter's analytics, operations and infrastructure teams are working on a system that uses Cassandra for large-scale real-time analytics for use both internally and externally.

Facebook

A social networking site like Facebook has lots of relationships between users and their data. In such services, data often needs to be denormalized to avoid having to perform lots of joins when executing queries. However, denormalization causes increased write traffic. Cassandra has several optimisations to make writes cheaper. When a write operation occurs, it does not immediately write to disk. Instead, the record is updated in memory, the write operation is added to the commit log, and the write is also applied on one machine. Periodically, the list of pending writes is processed and all the write operations are flushed to disk on the other replicas. Additionally, the writes are sorted so that the disk is written to sequentially, significantly improving seek time on the hard drive and reducing the impact of random writes to the system.

Cassandra is also used to facilitate search within a Facebook inbox. There may be a need to search in two ways: by term and by name of person. For term search, the key is the user id and the super columns are the words that make up the message; message identifiers related to that word make up the columns in the super column. For person-based search, the user id forms the key for each row of data, the recipient ids are the super columns, and the message identifiers form the columns within the respective super column. According to [7], term search latency ranges from 7.78 to 44.41 ms, whereas person-based search ranges from 7.69 to 26.13 ms. A minimal sketch of the two inbox-search layouts follows.
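To make the two layouts concrete, the sketch below models the term index and the person index as nested dictionaries (row key -> super column -> columns), mirroring the super column families described above. The ids, words and helper functions are invented for illustration only; they are not Facebook's actual schema or code.

```python
# Term search layout: row key = user id, super column = term,
# columns = ids of messages containing that term.
term_index = {
    "user:99": {
        "cassandra": {"msg:7": None, "msg:12": None},
        "gossip":    {"msg:12": None},
    }
}

# Person search layout: row key = user id, super column = recipient id,
# columns = ids of messages exchanged with that person.
person_index = {
    "user:99": {
        "user:42": {"msg:7": None, "msg:9": None},
        "user:57": {"msg:12": None},
    }
}

def search_by_term(user_id, term):
    """Return message ids for a (user, term) pair, or [] if none."""
    return sorted(term_index.get(user_id, {}).get(term, {}))

def search_by_person(user_id, other_id):
    """Return message ids exchanged between user_id and other_id."""
    return sorted(person_index.get(user_id, {}).get(other_id, {}))

print(search_by_term("user:99", "cassandra"))   # -> ['msg:12', 'msg:7']
print(search_by_person("user:99", "user:42"))   # -> ['msg:7', 'msg:9']
```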
