
Using GPT-3 in Our Applications

GPT-3 is a revolutionary technology that can be integrated into our applications to improve their performance and functionality. Let's find out how to use it!

Welcome to a new installment on artificial intelligence. As I explained in my previous article, GPT-3 (Generative Pretrained Transformer 3) is a state-of-the-art language processing model developed by OpenAI. It was trained on a vast amount of data and can generate human-like text on a wide range of topics. One way to access GPT-3's capabilities is through its API, which allows developers to easily integrate GPT-3 into their applications.

In this article, we will provide a detailed guide on how to use the GPT-3 API, including how to set up your API key, generate responses, and access the generated text. By the end, we will have a foundation for using GPT-3 in our own projects and applications.

## A Detailed Guide to Using the GPT-3 API

As an enthusiastic computer scientist, I would like to share my experience with the GPT-3 API. To get started, you need to create an OpenAI account and obtain your API key. Once you have your API key, you can use it to access the GPT-3 API and generate responses from text prompts. You can also specify additional parameters to control the kind of response you get: for example, the amount of text you want to generate, which model to use, and what kind of content you want to receive.
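
As an illustration, here is a minimal sketch of that workflow using the legacy openai Python package (pre-1.0); the prompt, model name, and parameter values are assumptions for the example, not requirements:

```python
# A minimal sketch: generate a completion with the (legacy) openai package.
# Assumes `pip install openai` and an API key exported as OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="text-davinci-003",  # the GPT-3 model to use
    prompt="Explain what an API is in one short paragraph.",
    max_tokens=100,   # caps the length of the generated text
    temperature=0.7,  # higher values produce more varied output
)

print(response.choices[0].text.strip())
```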

Once you have generated responses from the GPT-3 API, you can display them in your application or save them to a file for later use. You can also use these responses to train a custom model that generates answers tailored to specific questions. Finally, you can use these responses to build smarter applications that understand and answer users' questions.
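
For example, a small hypothetical helper could append each prompt/response pair to a JSONL file (a common format for fine-tuning datasets), reusing the response object from the sketch above; the file name and record layout are assumptions:

```python
# Append one prompt/completion pair as a JSON line for later reuse.
import json

def save_example(prompt: str, completion: str, path: str = "responses.jsonl") -> None:
    """Store a prompt/completion pair so it can be reviewed or reused later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

save_example(
    "Explain what an API is in one short paragraph.",
    response.choices[0].text.strip(),
)
```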

In conclusion, the GPT-3 API is a powerful tool that gives developers a quick and simple way to access GPT-3's capabilities and generate responses from text data, making it easier to build smarter, more interactive applications that can understand and respond to users' questions.

Article source: DZONE

On June 29th, GitHub announced Copilot, an AI-powered auto-complete for programmers, prompting a debate about the ethics of borrowed code.

GitHub is one of the biggest code repositories on the Internet. It hosts billions of lines of code, creating an unparalleled dataset with which to train a coding AI. And that is exactly what OpenAI, via GitHub and its owner Microsoft, has done: training Copilot using public repositories.

The chances are you haven't tried Copilot yet, because it's still invite-only via a VSCode plugin. People who have are reporting that it's a stunning tool, with a few limitations: it transforms coders from writers into editors, because when code is inserted for you, you still have to read it to make sure it's what you intended.

Some developers have cried "foul" at what they see as over-reach by a corporation unafraid of copyright infringement when long-term profits are on offer. There have also been reports of Copilot spilling private data, such as API keys. If, however, as GitHub states, the tool has been trained on publicly available code, the real question is: which genius saved an API key to a public repository?

GitHub’s defense has been that it has only trained Copilot on public code and that training AI on public datasets is considered “fair use” in the industry because any other approach is prohibitively expensive. However, as reported by The Verge, there is a growing question of what constitutes “fair use”; the TLDR being that if an application is commercial, then any work product is potentially derivative.

If a judge rules that Copilot’s code is derivative, then any code created with the tool is, by definition, derivative. Thus, we could conceivably reach the point at which a humans.txt file is required to credit everyone who deserves kudos for a site or app. It seems far-fetched, but we’re talking about a world in which restaurants serve tepid coffee for fear of litigation.

There are plenty of idealists (a group to which I could easily be accused of belonging) who nurture a soft spot for the open-source, community-driven web. And of course, it's true to say that many who walk the halls (or at least log into the Slack) of Microsoft, OpenAI, and GitHub are of the same inclination, contributing generously to open-source projects, mentoring, blogging, and offering a leg-up to other coders.

When I first learnt to code HTML, step one, before <p>Hello World!</p>, was view > developer > view source. Most human developers have been actively encouraged to look at other people’s code to understand the best way to achieve something — after all, that’s how web standards emerged.

Some individuals are perhaps owed credit for their work. One example is Robert Penner, whose work on easing functions inspired a generation of ActionScript/JavaScript coders. Penner published his functions online for free, under the MIT license; he also wrote a book which taught me, among other things, that a while loop beats a for loop, a lesson I use every day — I’d like to think the royalties bought him a small Caribbean island (or at least a vacation on one).

There is an important distinction between posting code online and publishing code examples in a book, namely that the latter is expected to be protected. Where Copilot is on questionable ground is that the AI is not a searchable database of functions, it’s code derived from specific problems. On the surface, it appears that anything Copilot produces must be derivative.

I don’t have a public GitHub repository, so OpenAI learned nothing from me. But let’s say I did. Let’s say I had posted a JavaScript-powered animation from which Copilot garnered some of its understanding. Does Microsoft owe me a fraction of its profits? Do I in turn owe Penner a fraction of mine? Does Penner owe Adobe (who bought Macromedia)? Does Adobe owe Brendan Eich (the creator of JavaScript)? Does Eich owe James Gosling (creator of Java), if not for the syntax, then for the name? And while we’re at it, which OS was Gosling using back in the mid-90s to compile his code — I doubt it was named after a fruit.

If this seems farcical, it’s because it is. But it’s a real problem created by the fact that technology is moving faster than the law. Intellectual property rights defined before the advent of the home computer cannot possibly define an AI-driven future.

 



Article source: Webdesigner Depot

Asset management and website performance optimization are two of those unavoidable headaches faced by every website owner.

A digital asset management (DAM) platform can provide centralized asset repositories with intuitive dashboards to help you manage assets. On the other hand, an image CDN can help you get rid of that messy responsive syntax and provide dynamic asset optimization with huge performance boosts.

The problem is that website performance has become such a competitive factor that DAMs with other priorities tend to fall short. On the other hand, specialized image CDNs don’t solve the problems associated with image management, particularly within organizations.

With that in mind, I propose solving these problems for good by putting together an image management and optimization stack using ImageEngine and Cloudinary. Rather than comparing these two tools, this article describes the benefits of using them to complement each other.

## Features and Asset Management Capabilities

As a DAM, Cloudinary provides you with a visual interface to store, manage, and edit your image and video assets. In that way, it’s not much different from any other professional image management software such as Adobe Bridge, except that it’s an online, browser-based service.

Using the Media Library, you can upload, delete, and organize images in folders, for example. The visual image editor allows you to make advanced transformations and image touch-ups and see the results instantaneously using tools like sliders, dropdowns, etc. You can even chain transformations together for multi-layered effects.

Cloudinary also allows you to manipulate images and videos this way using their URL-based API.
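
For illustration, a transformation URL might look like the following, using Cloudinary’s public demo cloud and sample image as stand-ins for your own cloud name and asset:

https://res.cloudinary.com/demo/image/upload/w_300,h_200,c_fill/sample.jpg

Here, w_300,h_200 sets the target dimensions and c_fill crops the image to fill them.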

Cloudinary has additional auxiliary features that make asset management easier (especially in organizations), such as backups, role-based multi-user admin, and feature extensions via third-party integrations and add-ons.

This is something most image CDNs don’t provide. Instead, they allow you to access and transform images using URL manipulation. Transformations are usually made using string-based parameters or directives. A serverless, headless DAM, if you will.

However, the advantage of using a dedicated image CDN like ImageEngine is that it can usually provide enhanced asset optimization. ImageEngine, for example, is an intelligent image CDN that uses WURFL device detection to finely read the context an image is accessed from (device model, PPI, OS, browser, resolution, etc.) and then chooses the optimal image for that configuration.

This frees website owners from having to do any additional optimization. This business logic is also built into all of its global PoP servers, and ImageEngine specifically delivers cache-hit ratios close to 100%. The following performance section will illustrate the difference this can make in practice.

Check out the key differences between ImageEngine and Cloudinary. And, for deeper insight, see the comparison with other similar CDNs, like imgix and Cloudflare.

## Performance

Just to cover our bases and prove that this is an effective asset management and optimization stack, I’m also going to back it up with a Lighthouse performance audit.

For this test, I built a web page with a tonne of images with overly large file sizes. In this first Lighthouse audit, I didn’t apply any optimization to the images. Here’s the result:

As you can see, we had some major problems when it came to the loading time of our assets. Overall, the page took more than 10 seconds to load. One of Google’s crucial user-centric performance metrics, LCP, scored a miserable 7.5s. Lighthouse suggested that some of the main problems encountered were the asset file size, inefficient cache policies, using non-optimal image formats, and improperly sized images.

Both Cloudinary and ImageEngine are supposed to address all of these factors with their auto image optimization. In the next audit, I used the same page and content but served my images via Cloudinary:

As you can see, there is improvement in most factors. FCP is now in the green, and both the Speed Index and LCP times have almost halved. Even TTI and CLS improved slightly. That being said, it’s still nowhere near optimal, and we’re still falling short of the all-important 3-second loading time ceiling.

So, finally, let’s do another Lighthouse audit – this time using ImageEngine on top of Cloudinary. Here are the results:

With ImageEngine, I finally scored in the green with 95. All the metrics that have to do with the sheer speed at which image content loads improved. The Speed Index and LCP, which is the most important, improved dramatically. CLS scored worse, but this typically varies from test to test.

You can find another and more extensive breakdown of the performance and pricing comparison here.

## Transformations, Bandwidth Utilization, and Cost

Cloudinary’s pricing plans work on a credit-based system. Starting with the free account, you get 25 credits/month. Each credit can be used for 1,000 transformations, 1 GB of storage, or 1 GB of net viewing bandwidth. The other two packages cost $99 for 225 credits and $249 for 600 credits, respectively.

You should plan to generate a minimum of 5 transformations per image. In effect, that limits you to around 200 images on the free plan (at five transformations per image, that’s about 1,000 transformations, or one credit, leaving the rest for storage and viewing bandwidth), excluding whatever manual transformations you make.

ImageEngine’s Basic plan costs $49 and provides you with 100 GB of Smart Bytes. Smart Bytes are based on optimized image content and translate to roughly 400-500 GB of raw images.

So, with Cloudinary, you have to compromise between bandwidth and storage usage as well as the number of transformations you can make. Transformations for Cloudinary are counted as they are dynamically generated on-demand.

However, if you use ImageEngine for optimization, you can switch off Cloudinary’s auto-optimization. When a new image variant is needed, it will be generated and delivered via ImageEngine. Considering variant count isn’t limited by ImageEngine, this will drastically cut down on the number of credits you’ll need to spend on transformations.

Effectively, that means you could use the bulk of your Cloudinary credits purely for storage and specific transformations. For example, advanced cropping, applying effects, or color adjustments. These are, after all, the main functions of a DAM.

With this setup, ImageEngine’s Basic plan and Cloudinary’s free plan should be adequate for most websites, saving around $50 a month.

## How to Implement Cloudinary with ImageEngine

### Signing up for Both Services

As it will house all of your image assets, the logical place to start would be to sign up with Cloudinary.

Create a (free) account, and make sure to take note of your “cloud name” during the setup wizard. This will be the name of your designated storage location on the Cloudinary platform and is usually a garbled string like di2zgnxh0 by default. However, you can change this to something more meaningful.

Once you’ve signed up, you can start uploading your image assets and creating different versions/transformations of them. Setting up Cloudinary integration on a CMS, like WordPress, is usually straightforward. Just indicate the CMS you’ll be using, copy the API key, install the plugin, and activate it.

Next, sign up for a free trial with ImageEngine. There will also be a short setup wizard during which you will:

  1. Provide ImageEngine with the website where your images will be delivered.
  2. Supply your image origin (in this case, your Cloudinary web folder). For now, you can only add the Cloudinary domain, e.g., res.cloudinary.com.
  3. Get your ImageEngine image-serving domain, e.g., {randomstring}.cdn.imgeng.in

When in your ImageEngine dashboard, you’ll see this domain listed under “Engines” as well as an entry for Cloudinary under “Origins.” Edit the latter and under “Advanced,” add your Cloudinary folder to the “PATH” field.

That’s it, you should now be able to store and manage images via Cloudinary and serve them via the ImageEngine CDN.

### Dynamically Loading Specific Image Variants

Let’s take a look at a use case for loading different transformations of individual images on your site. This example will showcase how you can use Cloudinary’s advanced image editing tools to transform images while still reaping the optimization rewards of using ImageEngine as your image CDN.

A popular practice today is to use rounded images for team, client, or profile portraits. Using Cloudinary, you can load this transformation using the following URL:

https://res.cloudinary.com/myimages/image/upload/w_400,h_400,c_crop,g_face,r_max/w_200/profile.jpg

This crops the image to 400 by 400 pixels centered on the detected face, applies the maximum amount of radial (circular) cropping around it, and then scales the result down to a width of 200px.

The same image can then be accessed via your ImageEngine delivery engine simply by swapping out the domain:

https://images.myimageengine.com.imgeng.in/image/upload/w_400,h_400,c_crop,g_face,r_max/w_200/profile.jpg

NOTE: I added my Cloudinary folder designation (“myimages”) as the path to my image origin. With that config, I don’t need to include it every time I use the image URL.
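
If you build these URLs in application code, the domain swap is a one-line string replacement. A minimal sketch, with both hostnames as hypothetical placeholders; note the Cloudinary folder drops out because it’s configured as the origin path:

```python
# Rewrite a Cloudinary asset URL to be served via ImageEngine.
# Both base URLs below are hypothetical placeholders.
CLOUDINARY_BASE = "https://res.cloudinary.com/myimages"
IMAGEENGINE_BASE = "https://images.myimageengine.com.imgeng.in"

def to_imageengine(cloudinary_url: str) -> str:
    """Swap the Cloudinary host (and folder) for the ImageEngine domain."""
    return cloudinary_url.replace(CLOUDINARY_BASE, IMAGEENGINE_BASE, 1)

print(to_imageengine(
    "https://res.cloudinary.com/myimages/image/upload/"
    "w_400,h_400,c_crop,g_face,r_max/w_200/profile.jpg"
))
# -> https://images.myimageengine.com.imgeng.in/image/upload/w_400,h_400,c_crop,g_face,r_max/w_200/profile.jpg
```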

If I specifically wanted to load the profile picture in WebP format (for transparency support, for example), I could add the ImageEngine directive f_webp:

https://images.myimageengine.com.imgeng.in/image/upload/w_400,h_400,c_crop,g_face,r_max/w_200/?imgeng=/f_webp/profile.jpg

## ImageEngine and Cloudinary – The Wrap Up

Both ImageEngine and Cloudinary are superb platforms that can make managing image and video assets easier and improve your website maintenance. However, each service has a specialty in which it outperforms the other.

For ImageEngine, it’s delivering blisteringly fast image loading times in next-gen formats and with a minimal loss of visual quality.

For Cloudinary, it’s providing a visual interface to organize, store, and edit your image and video assets.

As a further incentive, letting each of these services handle what it does best can lower your long-term operating costs.

 

[– This is a sponsored post on behalf of ImageEngine –]



Article source: Webdesigner Depot

As we have discussed before, the PDF is the ideal file format for saving, sharing, and protecting documents, both small and large. Its high compatibility with most operating systems makes it popular for sharing information between different parties. Furthermore, it provides a more static platform for working with important documents like contracts and manuals, as steps can be taken to prevent unwanted access to or editing of the file.

With large and highly complex files like this, however, different systems may have difficulty uploading, downloading, and reading the formatting for your document. This can lead to file corruption or increased loading times that can halt productivity. Thus, streamlining large PDF files can greatly benefit organizations that regularly use this format in day-to-day operations. 

Article source: DZONE