Articles

With TensorFlow.js, you can not only run machine-learned models in the browser to perform inference, but also train them. In this super-simple tutorial, I'll walk you through a basic "Hello World" example that gives you the scaffolding to get up and running.

Let’s start with the simplest web page imaginable:


Source of the article on DZONE (AI)


We know Grakn can be leveraged to model highly complex data, but how do we go about building a detailed model of a real-world system?

Here, we delve into Transport for London (TFL) data to understand and gain insights into the operation of the London Underground Network.

We go on to build surely the most desirable tool for such a network: a journey planner. (Because who doesn’t want to shave 0.3 minutes off their commute?)


Source of the article on DZONE (AI)

The days of leaving Slack to create an event on your calendar are over!

In this tutorial, you are going to learn how to create a scheduler bot that adds events to your personal calendar with a simple Slack slash command using the Nylas Calendar API.
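To make the flow concrete, here is a minimal sketch of what such a handler could look like, assuming a Flask app receives the slash command and the classic Nylas Python SDK creates the event; the route, credentials, calendar ID, and timestamps below are placeholders rather than the article's actual code.

from flask import Flask, request
from nylas import APIClient

app = Flask(__name__)

# Placeholder credentials; real values come from your Nylas application settings.
nylas = APIClient("NYLAS_CLIENT_ID", "NYLAS_CLIENT_SECRET", "NYLAS_ACCESS_TOKEN")

@app.route("/slack/schedule", methods=["POST"])
def schedule():
    # Slack sends the text typed after the slash command in the "text" form field.
    title = request.form.get("text", "New event")
    event = nylas.events.create()
    event.title = title
    event.calendar_id = "YOUR_CALENDAR_ID"  # placeholder calendar
    event.when = {"start_time": 1700000000, "end_time": 1700003600}  # Unix timestamps
    event.save()
    return "Added '{}' to your calendar.".format(title)

A real bot would also verify Slack's signing secret and parse a date out of the command text, but the shape of the round trip stays the same.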


Source of the article on DZONE (AI)



At Grakn, we recently released Grakn 1.3, with a slew of new features, bug fixes, and performance enhancements. Included in this release are new gRPC-based drivers for Java, NodeJS, and Python. This article will walk you through the Python driver and provide guidelines on how you can write your own for your language of choice.

Overview

The main reason for rewriting our drivers was a move from REST to gRPC in Grakn. This change has cleaned up our API and should provide performance benefits. Further, all of our available drivers (Java, Node, and Python) now expose the same objects and methods to users, subject to language naming conventions and available types. To maintain this uniformity across the stack, new language drivers should provide the same interface. Note that you will need both gRPC and protobuf support to create a functioning driver, so double-check (a) that compilers exist for your language, and (b) that your target language version is compatible with the compiler.
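As a rough illustration of those prerequisites, here is a minimal sketch of how a driver typically opens a gRPC connection in Python once the .proto definitions have been compiled; the generated module and stub names are placeholders, since Grakn's actual protocol files are not reproduced here.

import grpc

# Placeholder modules standing in for whatever grpc_tools.protoc generates
# from the server's .proto definitions.
import grakn_protocol_pb2 as messages
import grakn_protocol_pb2_grpc as services

# 48555 was the default Grakn gRPC port around this release; adjust if yours differs.
channel = grpc.insecure_channel("localhost:48555")
stub = services.SessionServiceStub(channel)

# A driver wraps calls on the stub in user-facing methods (open a session,
# run a query, iterate over streamed responses) so users never touch gRPC directly.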


Source of the article on DZONE (AI)

B2C is one of Microsoft's offerings that lets us programmers hand the business of managing logins and users over to people who want to be bothered with such things. This post contains very little code, but lots of pictures of configuration screens, which will probably be out of date by the time you read it.

A B2C setup starts with a tenant, so the first step is to create one.

Source of the article on DZONE

The Hasura platform’s data microservice provides an HTTP API to query Postgres using GraphQL or JSON in a permission-safe way.

You can exploit foreign key constraints in Postgres to query hierarchical data in a single request. For example, you can run this query to fetch “albums” and all their “tracks” (provided the “track” table has a foreign key to the “album” table):
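The query itself is not reproduced in this excerpt, but a nested request of that shape could look roughly like the sketch below, sent from Python for illustration; the endpoint URL and the album/tracks field names are assumptions that depend on your Hasura project and the relationships defined over that foreign key.

import requests

# Placeholder endpoint; substitute your own Hasura project's GraphQL URL.
HASURA_URL = "https://your-hasura-app.example.com/v1alpha1/graphql"

query = """
{
  album {
    title
    tracks {
      title
    }
  }
}
"""

response = requests.post(HASURA_URL, json={"query": query})
print(response.json())

Because the foreign key from "track" to "album" is declared in Postgres, the nested tracks selection can be resolved in the same request instead of requiring a second round trip.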

Source of the article on DZONE

The principle of least privilege is key when it comes to securing your infrastructure on AWS. For example, an engineer should only be able to control EC2 instances that are in scope for their day-to-day work. But how do you make sure an engineer is only allowed to …

  • Start, stop, and terminate a specific instance?
  • Create, attach, and delete specific volumes?
  • Create, restore, and delete specific snapshots?

As illustrated in the following figure, you can restrict access to EC2 instances, EBS volumes, and EBS snapshots by making use of …
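The excerpt breaks off above, so the following is only a hedged illustration of the general idea rather than the article's own recipe: an inline IAM policy that allows the start/stop/terminate actions on a single instance ARN, attached here with boto3. The account ID, region, instance ID, and user name are placeholders.

import json
import boto3

# Placeholder ARN; substitute your account ID, region, and instance ID.
instance_arn = "arn:aws:ec2:eu-west-1:123456789012:instance/i-0123456789abcdef0"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
            ],
            "Resource": instance_arn,
        }
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="engineer",                       # placeholder IAM user
    PolicyName="restrict-ec2-to-one-instance",
    PolicyDocument=json.dumps(policy),
)

Volumes and snapshots can be scoped the same way by listing the corresponding ec2 actions and their resource ARNs.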

Source of the article on DZONE

I am excited to share my experience with Spark Streaming, a tool I have been playing with on my own. Before we get started, let’s have a sneak peek at the code that lets you watch some data stream through a sample application.

from operator import add, sub
from time import sleep
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Set up the Spark context and the streaming context
sc = SparkContext(appName="PysparkNotebook")
ssc = StreamingContext(sc, 1)

# Input data
rddQueue = []
for i in range(5):
    rddQueue += [ssc.sparkContext.parallelize([i, i + 1])]

inputStream = ssc.queueStream(rddQueue)
inputStream.map(lambda x: "Input: " + str(x)).pprint()
inputStream.reduce(add).map(lambda x: "Output: " + str(x)).pprint()

ssc.start()
sleep(5)
ssc.stop(stopSparkContext=True, stopGraceFully=True)

Spark Streaming has a different view of data than Spark. In non-streaming Spark, all data is put into a Resilient Distributed Dataset, or RDD. That isn’t good enough for streaming. In Spark Streaming, the main noun is the DStream, or Discretized Stream, which is basically a sequence of RDDs. The verbs are pretty much the same: just as we have actions and transformations on RDDs, we have actions and transformations on DStreams.

Source of the article on DZONE

I use a shell every day. Almost always, I want to repeat a previous command, or repeat it after a slight modification. A very convenient way is to use arrow-up to get the most recent command back. Another common trick is to type ctrl-R and incrementally search for a previously used command. However, there are two other tricks for repeating previous commands that I use all the time, which are not as well known.

Escape-Dot (or !$)

Often, you want to repeat only the last argument of the previous command. For example, suppose you want to run git diff path/to/tests, and then git add path/to/tests. For the second command, you can type git add and then escape-dot (escape followed by a period), which gets expanded to path/to/tests, the last argument of the previous command.

Source of the article on DZONE