DGIST Develops a Technology That Can Process Large Graph Big Data with a Single Computer
A technology that can process large graph data, comparable to the neural network data of a human brain, on a single computer.
A research team led by Professor Kim Min-soo of DGIST's (Daegu Gyeongbuk Institute of Science & Technology) Department of Information and Communication Engineering announced on the 7th that it has developed 'GStream 2.0', a technology that can process large graph data, which previously had to be processed on supercomputers, on a single computer.
The technology can process graph-type big data, which is used in a variety of fields such as neuroscience, AI (Artificial Intelligence), IoT, the web, and social networks, on a single computer equipped with two GPUs (the main chips of graphics cards) and two PCI-e SSDs. It can handle graphs of up to 256 million edges at a speed of up to 2 million edges per second.
The neural network in a human brain is made up of about 100 billion connections called synapses. 'GStream 2.0', which can process 256 million edges, can therefore handle neural network data corresponding to about 1/400th the size of a human brain.
Because communication costs and memory usage normally increase exponentially when data is distributed across many computers, owing to the complicated structure of the brain's neural network, industry had struggled to process even a neural network 1/1000th the size of a human brain.
GraphLab from Carnegie Mellon University, currently regarded as the best-performing system for big data analysis, takes 1,400 seconds to process a graph of up to 32 million edges using a supercomputer with 480 CPU cores, 2TB of memory, and a 5Gbps network.
Instead of distributing large graph data across the memories of many computers, the research team stored it on the PCI-e SSDs of a single computer. The team tried a new method that streams the data asynchronously from SSD to GPU memory while processing it with the GPU's thousands of computing cores. As a result, the team was able to avoid the communication costs and memory consumption of the distributed approach.
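The core idea described above is a streaming pipeline: while the GPU processes one block of edges, the next block is already being loaded from the SSD, so transfer time is hidden behind compute time. The following is a minimal sketch of that overlap pattern, with hypothetical `load_chunk` and `process_chunk` stand-ins (the real GStream 2.0 would use asynchronous host-to-device copies and GPU kernels rather than Python threads):

```python
from concurrent.futures import ThreadPoolExecutor

def load_chunk(chunk):
    # Stand-in for an asynchronous SSD read of one edge block.
    return list(chunk)

def process_chunk(edges, degree):
    # Stand-in for a GPU kernel over the edge block (here: degree counting).
    for src, dst in edges:
        degree[src] = degree.get(src, 0) + 1
        degree[dst] = degree.get(dst, 0) + 1

def stream_process(chunks):
    """Overlap loading of chunk i+1 with processing of chunk i."""
    degree = {}
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(load_chunk, chunks[0])
        for nxt in chunks[1:]:
            edges = future.result()                # wait for current chunk
            future = pool.submit(load_chunk, nxt)  # prefetch next chunk
            process_chunk(edges, degree)           # compute while next loads
        process_chunk(future.result(), degree)     # drain the last chunk
    return degree

# Example: two edge blocks streamed through the pipeline.
degrees = stream_process([[(0, 1), (1, 2)], [(2, 0)]])
```

The double-buffering structure is what lets a single machine keep its GPUs busy even though the full graph never fits in GPU (or even host) memory at once.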
As a result, a single computer with two GPUs and two PCI-e SSDs processed a graph of about 32 million edges in 500 seconds, and could also handle large graphs of about 256 million edges.
"We have secured a software technology that can quickly process big data with GPUs and SSDs," said Professor Kim. "This technology can be used to process data in the neuroscience and AI fields and for IoT data-based cyber security, and in particular to implement huge, deep artificial neural networks."
The research was presented at the 2016 ACM SIGMOD conference, a leading international conference in the database field, recently held in San Francisco.
Staff Reporter Jung, Jaehoon | [email protected]