Articles


Introduction

Nebula Graph, a distributed graph database, has changed significantly in V2.0 compared with V1.0. One of the most obvious changes is in the code layout: in Nebula Graph 1.0, the code of the Query, Storage, and Meta modules is placed in one repository, while from Nebula Graph 2.0 onwards, these modules are split across three repositories:

  • nebula-graph: Mainly contains the code of the Query module.
  • nebula-common: Mainly contains expression definitions, function definitions, and some public interfaces.
  • nebula-storage: Mainly contains the code of the Storage and Meta modules.

This article introduces the overall structure of the Query layer and uses an nGQL statement to show how a query is processed by the four main modules of the Query layer.
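As a rough illustration of that flow, here is a minimal sketch in Scala of a statement passing through the conventional stages of a query engine (parse, validate, plan, execute). All names and types below are assumptions made for this sketch, not Nebula Graph's actual modules or classes.

    // A toy query pipeline: parse -> validate -> plan -> execute.
    // Stage names and types are illustrative only, not Nebula Graph's
    // actual Query layer classes.
    final case class Ast(statement: String)
    final case class Plan(steps: List[String])

    object QueryPipelineSketch {
      // Parse: turn the raw statement text into a (trivial) AST.
      def parse(ngql: String): Ast = Ast(ngql.trim.stripSuffix(";"))

      // Validate: reject obviously malformed input.
      def validate(ast: Ast): Either[String, Ast] =
        if (ast.statement.nonEmpty) Right(ast) else Left("empty statement")

      // Plan: produce an ordered list of execution steps.
      def plan(ast: Ast): Plan =
        Plan(List(s"scan for: ${ast.statement}", "project results"))

      // Execute: run each step of the plan in order.
      def execute(plan: Plan): Unit =
        plan.steps.foreach(step => println(s"executing: $step"))

      def main(args: Array[String]): Unit =
        validate(parse("GO FROM \"player100\" OVER follow;")) match {
          case Right(ast) => execute(plan(ast))
          case Left(err)  => println(s"validation failed: $err")
        }
    }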

Source of the article on DZONE

If there were a list of the top ten buzzwords in the technology industry for 2019, containers would surely be one of them. With the popularity of Docker, it is used in more and more front-end scenarios. This article shows how we use Docker in the visualization interface of Nebula Graph, a distributed open-source graph database.

Why Use Docker

Docker is widely used in daily front-end development. Nebula Graph Studio (a visualization tool for Nebula Graph) uses Docker based on the following considerations:

Source of the article on DZONE

This article introduces how to migrate your data from Neo4j to Nebula Graph with Nebula Graph Exchange (Exchange for short), a data migration tool backed by the Nebula Graph team. Before getting into the import steps, let's first take a look at how data migration is implemented inside Nebula Graph Exchange.

Data Processing in Nebula Graph Exchange

Our data migration tool is named Nebula Graph Exchange. It uses Spark as the import platform so that it can handle huge dataset imports with good performance. The DataFrame, a distributed collection of data organized into named columns, is provided by Spark and supports a wide array of data sources. With DataFrames, adding a new data source requires only the code that reads the configuration file and a Reader type that returns a DataFrame.
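To make that concrete, below is a minimal sketch in Scala of what such a Reader extension might look like. The Reader trait, the CsvReader class, and the file path are assumptions made for this sketch, not Exchange's actual API.

    import org.apache.spark.sql.{DataFrame, SparkSession}

    // Hypothetical Reader interface: each data source implements read()
    // and hands back a Spark DataFrame for the downstream import stages.
    trait Reader {
      def read(): DataFrame
    }

    // Example: a CSV-backed reader whose path would come from the
    // tool's configuration file.
    class CsvReader(session: SparkSession, path: String) extends Reader {
      override def read(): DataFrame =
        session.read
          .option("header", "true") // first line holds column names
          .csv(path)
    }

    object ReaderSketch {
      def main(args: Array[String]): Unit = {
        val session = SparkSession.builder()
          .appName("exchange-reader-sketch")
          .master("local[*]")
          .getOrCreate()

        // Rows read into the DataFrame can then be transformed and
        // written into Nebula Graph by later import stages.
        new CsvReader(session, "players.csv").read().show()
      }
    }

Because every source is funneled through the same DataFrame abstraction, the rest of the import pipeline stays untouched when a new Reader is added.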

Source of the article on DZONE