Articles

Amazon Web Services (AWS) is the largest cloud platform in the world, offering more than 200 services. In this article, we break down 10 AWS services that support at least some SQL syntax, discuss their use cases, and give examples of how to write queries.

| Service | Description | SQL Support | Use Case |
| --- | --- | --- | --- |
| RDS | Postgres, MySQL, etc. | Full | Small-to-medium web apps |
| Aurora | Serverless databases | Full | Serverless apps |
| Redshift | Data warehouse | Full | OLAP, petabytes of data, analytics |
| DynamoDB | NoSQL database | Some – PartiQL | E-commerce, building fast |
| Keyspaces | Managed Cassandra (key-value) | Some – CQL | Messaging |
| Neptune | Graph database | Some – openCypher | Social networks |
| Timestream | Time series database | Partial | IoT, logging |
| Quantum Ledger (QLDB) | Cryptographically verified transactions | Some – PartiQL | Finance |
| Athena | Ad-hoc queries on S3 | Some – CTAS | Historical data |
| Babelfish | Microsoft SQL Server on Aurora | Full | .NET apps |

The table above shows how SQL support varies across these services. A graph database cannot be queried the same way as a classic relational database, so SQL-compatible subsets, like PartiQL, have emerged to fit these data models. In fact, even within standard SQL there are many dialects from different vendors, such as Oracle and Microsoft.
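For example, here is what a query can look like in PartiQL, the SQL-compatible language used by DynamoDB and QLDB. This is a minimal sketch: the `Orders` table and its attributes are hypothetical names for illustration, not part of any AWS example.

```sql
-- PartiQL (DynamoDB): fetch a customer's recent orders.
-- Table and attribute names are illustrative assumptions.
SELECT OrderId, OrderDate, Total
FROM Orders
WHERE CustomerId = 'c-1001'
  AND OrderDate >= '2024-01-01';
```

Aside from the nonrelational data model underneath, the statement reads like ordinary SQL, which is exactly the appeal of these dialects.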

Source of the article on DZONE

From intrusion detection to threat analysis to endpoint security, the effectiveness of cybersecurity efforts often boils down to how much data can be processed in real time with the most advanced algorithms and models.

Many factors are obviously involved in stopping cybersecurity threats effectively. However, the databases responsible for processing billions or trillions of events per day (from millions of endpoints) play a particularly crucial role. High throughput and low latency correlate directly with better insights and with more threats discovered and mitigated in near real time. These data-intensive cybersecurity systems are incredibly complex: many span four or more data centers, with database clusters exceeding 1,000 nodes and petabytes of heterogeneous data under active management.

Source of the article on DZONE

We can’t quit you, baseball! The season might be over, but we want more. So, we’re dipping into the baseball data to see what else we can learn. Read on for one more run around the bases!


Put Me In, Coach

This season, all anyone talked about was home runs. There were 6,770 homers hit during the regular season this year. That's 665 MORE than the previous record of 6,105, set in 2017! And exactly half of the teams in the league set franchise home run records. Holy homer!

Source of the article on DZONE


Introduction

Nebula Graph, a distributed graph database, has changed significantly from v1.0 to v2.0. One of the most obvious changes is that in Nebula Graph 1.0, the code for the Query, Storage, and Meta modules lived in a single repository, while from Nebula Graph 2.0 onwards these modules are split across three repositories:

  • nebula-graph: Mainly contains the code of the Query module.
  • nebula-common: Mainly contains expression definitions, function definitions, and some public interfaces.
  • nebula-storage: Mainly contains the code of the Storage and Meta modules.

This article introduces the overall structure of the Query layer and uses an nGQL statement to describe how it is processed in the four main modules of the Query layer.
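As a concrete illustration, a simple traversal statement like the one below could serve as such a running example. This is a hedged sketch: the vertex ID and the `follow` edge type are borrowed from Nebula Graph's sample datasets, not from this article.

```ngql
# Start from vertex "player100", traverse outgoing "follow" edges,
# and return the destination vertex of each edge.
GO FROM "player100" OVER follow YIELD follow._dst;
```

Parsing, validating, planning, and executing a statement like this is the work that the Query layer's modules divide among themselves.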

Source of the article on DZONE

If there were a list of the top ten buzzwords in the technology industry in 2019, "container" would surely be one of them. With the popularity of Docker, it is being used in more and more front-end scenarios. This article shows how we use Docker in the visualization interface of Nebula Graph, a distributed open-source graph database.

Why Use Docker

Docker is widely used in daily front-end development. Nebula Graph Studio (a visualization tool for Nebula Graph) uses Docker based on the following considerations:

Source of the article on DZONE

This article mainly introduces how to migrate your data from Neo4j to Nebula Graph with Nebula Graph Exchange (or Exchange for short), a data migration tool backed by the Nebula Graph team. Before introducing how to import data, let’s first take a look at how data migration is implemented inside Nebula Graph.

Data Processing in Nebula Graph Exchange

Our data migration tool is named Nebula Graph Exchange. It uses Spark as the import platform so that it can handle huge datasets while ensuring performance. Spark's DataFrame, a distributed collection of data organized into named columns, supports a wide array of data sources. With DataFrames, adding a new data source requires only the code that reads its configuration and a Reader that returns a DataFrame, as sketched below.
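To make that concrete, here is a minimal PySpark sketch of the DataFrame abstraction Exchange builds on. It is not Exchange's actual code; the file name and options are assumptions for illustration.

```python
from pyspark.sql import SparkSession

# A local session for illustration; Exchange runs Spark jobs at scale.
spark = SparkSession.builder.appName("exchange-sketch").getOrCreate()

# Read vertex data from a CSV file. Swapping format("csv") for "json",
# "parquet", or a database connector changes the source, while the
# rest of the pipeline keeps working against the same DataFrame API.
vertices = (
    spark.read.format("csv")
    .option("header", "true")
    .load("player_vertices.csv")  # hypothetical input file
)

vertices.show(5)  # peek at the first rows before importing
```

Because every Reader returns a DataFrame, the downstream import logic never needs to know where the data came from.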

Source of the article on DZONE

Ab Initio Batch Graph

In this article, the batch Ab Initio graph is triggered via the scheduler; the files in the destination folder are retrieved, and the data in them is parsed according to certain criteria.

The parsed data is then written to the database.

Source of the article on DZONE


Background

The Performance Challenge Championship (PCC) is an event organized by ArchNotes. After learning the rules of the competition, I found that PostgreSQL is very well suited to this scenario. I reproduced the scenario as-is and implemented it with PG, but how does it perform?

The competition is described as follows (page in Chinese, but Chrome can translate): https://github.com/archnotes/PCC

Source of the article on DZONE

Consider the graph below. I already talked about this graph when I wrote about permission-based graph queries.

In this post, I want to show off another way to deal with the same problem, but without using graph queries and using only the capabilities that we have in RavenDB 4.1.

Source of the article on DZONE

Imagine how much information is contained in one trillion facts. That’s roughly equal to…