Frontend Development Overview, News & Trends | The New Stack https://thenewstack.io/frontend-development/

Canva Launches Developer Platform, Eyes Generative AI Apps https://thenewstack.io/canva-launches-developer-platform-eyes-generative-ai-apps/ Wed, 14 Jun 2023 18:00:08 +0000

Today at Canva’s first-ever developer conference in San Francisco, the company announced a new developer platform, along with a $50 million “Innovation Fund.” Canva, a design platform that competes with the likes of Adobe and Figma, claims it has 135 million monthly active users. So this is potentially of great interest to devs — including independent developers, who will be able to charge money for the apps they build.

To find out more about Canva’s developer platform, and why developers might want to utilize it, I spoke to Canva’s head of ecosystem, Anwar Haneef.

The key to the new dev platform is the “Canva App,” which is described as “a JavaScript file that runs inside an iframe.” The file can then be displayed within Canva — which is both a web-based service and an application across various platforms — and access a number of APIs that interact with a user’s design. To build apps, developers can access the Canva Apps SDK (Software Development Kit), which is now available publicly.
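To make this concrete, here is a minimal sketch of what such an app might look like. The package name, the addNativeElement() call and its options are assumptions made for illustration rather than details confirmed by Canva, so treat it as indicative of the shape of a Canva App, not as working SDK code.

```typescript
// Hypothetical Canva app: a React component that runs in Canva's iframe and
// inserts a text element into the user's current design. The "@canva/design"
// import and the addNativeElement() signature are assumptions, not verified API.
import * as React from "react";
import { addNativeElement } from "@canva/design";

export function App(): JSX.Element {
  const addHeadline = async () => {
    await addNativeElement({
      type: "TEXT",
      children: ["Hello from my Canva app"],
    });
  };

  return <button onClick={addHeadline}>Add headline</button>;
}
```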

What Kinds of Apps Will Be Built?

Canva’s user base is more wide-ranging than Figma’s — it’s generally seen as a business or marketing tool, whereas Figma is explicitly targeted at designers (although I discovered earlier this year that a lot of developers use Figma too). Haneef said that its users utilize Canva for various design purposes, such as marketing and sales materials, and social media content.

When I asked what kinds of apps Canva hopes will be built for this large user base, unsurprisingly Haneef highlighted generative AI apps first. He expects to see AI apps such as virtual avatars, image manipulation apps, and photo editing tools. Indeed, one of the apps to be showcased at the developer conference today is a generative audio app, which he said will generate custom music for a Canva user.

“There’s a whole gamut of media, visual, and auditory media type of applications that we expect,” said Haneef, “especially building off of this Cambrian explosion of AI happening lately.”

Canva AI Apps

Another area of interest to Canva is workflow-focused apps, continued Haneef. Canva has many users from marketing and sales backgrounds, who use the platform to create designs and incorporate assets from digital asset management suites. So he envisions apps that seamlessly integrate productivity tools, like Monday or Asana, into Canva.

A Canva App Store for Developers

Canva apps will be a combination of both free and paid options, so there will be a marketplace for the apps. Haneef said that Canva wants to make the platform sustainable for everyone involved, whether they’re developers working on behalf of external companies or independent devs hoping to monetize their app.

Developers will be able to set up subscription services or one-time payment models for their apps, Haneef confirmed. In addition, the $50m “Canva Developers Innovation Fund” is available for developers to apply to.

Clearly, Canva is targeting JavaScript developers first and foremost. But Haneef also said that it wants to entice any other frontend developer to build on Canva. The APIs and tools provided by Canva are designed to be familiar and comfortable for web developers to pick up and use, he said. They will be offering pre-built components and libraries, which he said will allow developers to create apps “in a matter of days.”

Canva Apps Marketplace

While JavaScript is the main focus, Canva also has plans to launch something called “Connect APIs”, which will be REST APIs that can connect any external app with Canva. A waitlist for this will open today and the APIs will be ready later this year, stated the company in its press release.

Haneef added that the Connect APIs won’t have an SDK, so developers will be able to use them with any programming language of their choice.

The ‘Canva For Everything’ Hype Cycle

Given its broad user base and the fact it can be used to design pretty much anything digital, Canva is often talked about on social media as a threat to other creator platforms. Just this week, Jamie Marsland (who runs a WordPress dev shop) suggested that Canva is a threat to WordPress, because “Canva’s distribution potential is absolutely enormous.” He pointed out that Canva already has a one-page site builder.

Commenters on Marsland’s tweet pointed out that Canva is more of a competitor to Photoshop currently. But this new developer platform could add a lot of nifty functionality for Canva users. Perhaps a CMS company will create an app that does indeed make Canva into a full-fledged website builder.

Regardless, it’s clear that Canva itself has grand ambitions to broaden its usage. According to Haneef, Canva aims to be “the most pluggable platform in the world.” That sounds hyperbolic — and it is — but Canva’s large user base undoubtedly makes it an attractive proposition for developers. So if you’re a dev looking for opportunities to plug in and profit, then it’s worth checking out this new Canva developer platform.

Dev News: Apollo Drama, Monster API and Mobile App Discontent https://thenewstack.io/dev-news-apollo-drama-monster-api-and-mobile-app-discontent/ Sat, 10 Jun 2023 13:00:04 +0000

Condolences to Apollo, the popular Reddit app, which will be shutting down June 30. The gritty details were posted on Reddit, of course, but essentially app developer Christian Selig blamed Reddit’s API price increase. Selig said it was a 20x price increase, amounting to approximately $2.50 per month per user. At that price with the app’s current usage, it would cost almost $2 million per month or over $20 million per year, Selig claimed.

Apparently, things got ugly between Reddit and Selig, who offers a link to audio of a call with Reddit and links to a screenshot of a Mastodon poster accusing him of attempting to blackmail Reddit. It seems Reddit also accused the app of scraping, leading Selig to post the app’s backend code on GitHub. The price increase also led to other turmoil for Reddit, including news of a subreddit strike planned for Monday.

Monster API Platform to Simplify AI

A new company called Monster API launched its platform this week. It’s designed to give developers access to graphics processing unit (GPU) infrastructure and pre-trained artificial intelligence models at a lower cost than other cloud-based options, according to the press statement.

Monster API uses decentralized computing to allow developers to create generative AI applications. The new platform allows developers to access AI models such as Stable Diffusion, Whisper AI and StableLM “out-of-the-box.”

Monster API’s full stack includes an optimization layer, a compute orchestrator, a massive GPU infrastructure, and ready-to-use inference APIs. It also supports fine-tuning large language models such as LLaMA and StableLM.

“We eliminate the need to worry about GPU infrastructure, containerization, setting up a Kubernetes cluster, and managing scalable API deployments as well as offering the benefits of lower costs,” said Saurabh Vij, CEO and co-founder of the company. “One early customer has saved over $300,000 by shifting their ML workloads from AWS to Monster API’s distributed GPU infrastructure.”

Monster API is the collaboration of two brothers, Saurabh Vij and Gaurav Vij. Gaurav faced a significant challenge at his startup when his AWS bill skyrocketed, according to the company statement. In parallel, Saurabh, formerly a particle physicist at CERN (the European Organization for Nuclear Research), recognized the potential of distributed computing for this kind of work. Inspired by these experiences, the brothers sought to harness the computing power of consumer devices like PlayStations, gaming PCs, and crypto mining rigs for training ML models.

After multiple iterations, they successfully optimized GPUs for ML workloads, leading to a 90% reduction in Gaurav’s monthly bills.

The company promises a predictable API bill, versus the usual model of paying by GPU time. Its APIs also scale automatically to handle increased demand, from one to 100 GPUs. The company also announced $1.1 million in pre-seed funding this week.

The Mobile Release of Our Discontent

A majority of companies “are not happy” with how often they release new versions of their mobile apps, according to a survey of 1,600 companies conducted by Bitrise, a mobile DevOps platform. Sixty-two percent of teams said that their release frequency is “unsatisfactory.”

The survey found React Native is the most popular cross-platform framework, used by 48.33% of teams, followed by Flutter at 37.5%. When it comes to testing, only 10.4% of teams said they test as many devices as possible with a device farm, and 31% reported they test the most commonly used devices in their user base.

It also found that 25.7% of teams don’t have the features and functionality of their iOS and Android apps in sync.

Bitrise is also proposing a benchmark for the mobile app market similar to Google’s DORA metrics, which it calls MODAS: the Mobile DevOps Assessment. MODAS assesses apps across five key areas:

  • Creation
  • Testing
  • Deployment
  • Monitoring
  • Collaboration

The study also links to a number of online case studies about mobile speed, noting that when it comes to “mobile app iterations for example, speed is everything: there is a strong correlation between the frequency of updates and the ranking in the app stores.”

Vision Pro for Devs: Easy to Start, but UI Not Revolutionary https://thenewstack.io/vision-pro-for-devs-easy-to-start-but-ui-not-revolutionary/ Fri, 09 Jun 2023 16:09:14 +0000

“Welcome to the era of spatial computing,” announced Apple as it unveiled its latest device, a pair of mixed-reality goggles called the Vision Pro. CEO Tim Cook described it as “a new kind of computer that augments reality by seamlessly blending the real world with the digital world.” A new operating system powers the device, called visionOS — which Apple says contains “the building blocks of spatial computing.”

If it’s “a new type of computer,” as Apple claims, then that means a new greenfield for developers. So what can devs expect from visionOS and Vision Pro? I watched a WWDC session entitled “Get started with building apps for spatial computing” to find out.

“By default, apps launch into the Shared Space,” began Apple’s Jim Tilander, an engineer on the RealityKit team. “This is where apps exist side-by-side, much like multiple apps on a Mac desktop. People remain connected to their surroundings through passthrough.” (Passthrough means the headset’s cameras feed a live view of your physical surroundings into the display, so the real world remains visible alongside the virtual content.)

He then introduced three new concepts, all of them SwiftUI scenes: Windows, Volumes, and Spaces. SwiftUI has been around for four years, serving as Apple’s primary user interface framework across its various products. For visionOS, SwiftUI has been bolstered with “all-new 3D capabilities and support for depth, gestures, effects, and immersive scene types.”

Each of the three scenes is self-explanatory, but it’s worth noting that in addition to the “Shared Space” concept, Apple also has “Full Space,” which is when you want “a more immersive experience” for an application and so “only that app’s content will appear.”

It’s interesting to note that Apple appears to have a different definition of “presence” than Meta (née Facebook). Meta defines presence as “high fidelity digital representations of people that create a realistic sense of connection in the virtual world.” In other words, “presence” to Meta means being fully immersed in the virtual world. But based on the following graphic I saw in this session, “presence” to Apple means less immersion — it’s letting the physical world enter the view of your Vision Pro goggles.

Privacy Pros and Cons

Apple claims that the Vision Pro and visionOS platform treat user privacy as a core principle, while also “making it easy for you as a developer to leverage APIs to take advantage of the many capabilities of the device.”

Apple’s solution to preserving user privacy is to curate data and interactions for developers. Tilander gave two interesting examples of this.

“Instead of allowing apps to access data from the sensors directly, the system does that for you and provides apps with events and visual cues. For example, the system knows the eye position and gestures of somebody’s hands in 3D space and delivers that as touch events. Also, the system will render a hover effect on a view when it is the focus of attention, but does not communicate to the app where the person is looking.”

Sometimes “curated” data won’t be enough for developers. Tilander explained that “in those cases where you actually do need access to more sensitive information, the system will ask the people for their permission first.”

Given how potentially invasive the Vision Pro is to people’s privacy — including the user, since it has eye-scanning capabilities for login and tracking — the restrictions Apple has imposed on developers sound reasonable.

However, Google developer Brandon Jones pointed out on Twitter that “if you want to do AR apps, you must give Apple full rendering control.” While generally, he thinks this is a good thing — “You don’t want, for example, ads to be able to infer how much time a user spent looking at them” — he isn’t so excited about Apple “quietly re-inventing and side-stepping web standards in order to achieve that.”

In a nutshell, Apple’s privacy restrictions for Vision Pro are implemented at the OS level, giving Apple a great deal of control. Jones admitted that most developers will be comfortable with that, but he correctly noted that “Apple (already notorious for clamping down on what you can do with iOS) is doubling down on restricting the ways you can diverge from their chosen patterns.”

The Tools

“Everything starts with Xcode,” Tilander said, regarding how developers will build apps for visionOS. Xcode is Apple’s integrated development environment (IDE) and it comes with a simulator for Vision Pro and an enhanced “Instruments” performance analysis tool (which includes a new template, RealityKit Trace).

The frameworks to build 3D content are ARKit and RealityKit, which handle tracking, rendering, physics, animations, spatial audio, and more.

For visionOS, Apple is introducing a new editor called Reality Composer Pro, which “allows you to preview and prepare 3D content for your apps.” A Reddit user described it as “like Powerpoint in AR,” so the emphasis is on ease of use.

No doubt realizing that it needed more than just existing Apple developers to start thinking about developing for Vision Pro, Apple has also partnered with Unity, an existing 3D platform. In the WWDC 23 opening keynote, one of the presenters noted that “popular Unity-based games and apps can gain full access to visionOS features, such as passthrough, high-resolution rendering, and native gestures.” Tilander confirmed in his session that no Unity plug-ins would be required, and that developers can simply “bring your existing content over.”

How to Get Started

To begin a new app, in Xcode you can choose the default app template for “xrOS” (apparently the shortened version of visionOS). From there, you select a “scene type,” with the default being “Window.” This is in a Shared Space by default, but you can change that.

“And when you finish the assistant,” continued Tilander, “you are presented with an initial working app in SwiftUI that shows familiar buttons mixed in with a 3D object rendered with RealityKit.”

You can also easily convert iPhone or iPad apps into visionOS apps, noted Tilander.

Developers can expect more resources, including a developer kit, in July. An initial visionOS SDK will be available in Xcode by the end of this month.

Apple Keen for Devs to Jump Into 3D

As usual when Apple announces a new device, a lot of thought has been put into the developer tools and techniques for it. There’s nothing in visionOS that looks out of reach for existing iOS developers, so it’s a fairly seamless transition for Apple’s developer community.

Of course, the drawback is that Apple is enticing developers into yet another closed developer ecosystem. visionOS will have its own App Store, we were told at WWDC 23, but you can guarantee it won’t be any more open than the iOS App Store.

The final thing to note for developers is that the user interface really isn’t that different from iPhone, at least for the first-generation Vision Pro. “They’re still just rectangles on the internet,” as one Twitter user put it. As others have pointed out, this is probably because Apple wants to make it easy for its existing developers to start building on visionOS. Now, from a user point of view, early reports suggest that Vision Pro may indeed be magical. But from a developer point of view, Vision Pro isn’t that revolutionary — yet.

Dev News: A New Rust Release and Chrome 114 Updates https://thenewstack.io/dev-news-a-new-rust-release-and-chrome-114-updates/ Sat, 03 Jun 2023 16:00:47 +0000

The Rust team released Rust 1.70.0 Thursday, and users should see “substantially improved performance” when fetching information from the crates.io index.

That’s because this release makes Cargo’s “sparse” protocol enabled by default for reading the index from crates.io. Previously, using that protocol required configuration.

It comes with a caveat, though — the upgrade changes the path to the crate cache, so dependencies must be downloaded again. The Rust team suggested developers clear out the old registry paths once they’ve fully committed to using the sparse protocol.

Also, OnceCell and its thread-safe counterpart OnceLock have been stabilized for one-time initialization of shared data.

“These can be used anywhere that immediate construction is not wanted, and perhaps not even possible like non-const data in global variables,” the team noted. “Crates such as lazy_static and once_cell have filled this need in the past, but now these building blocks are part of the standard library, ported from once_cell’s unsync and sync modules.”

Other changes in this release:

  • IsTerminal is also stabilized;
  • The -Cdebuginfo compiler option now has named levels of debug information, meaning you can now set the debug levels by name: “none” (0), “limited” (1), and “full” (2), as well as two new levels, “line-directives-only” and “line-tables-only.” These named options aren’t yet available via Cargo.toml, where support is expected in the next release;
  • Stable and beta builds of Rust no longer allow unstable test options, making them truly nightly-only as documented;
  • A long list of stabilized APIs.

What’s New in Chrome 114

Chrome 114 is out now and Adriana Jara, a developer relations engineer with Chrome, outlined what frontend developers need to know.

First, in one line, developers can now improve text layouts. Developers don’t necessarily know the final size, font size or even language of a text, which can make it difficult to make headlines and text blocks look … well, balanced. Now, with text-wrap: balance, developers can request the browser to figure out the best balanced line-wrapping solution.

“The balanced text block is more pleasing to the eye of a reader,” Jara wrote. “It grabs attention better and is overall easier to read.”
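The property itself is plain CSS, but it can just as easily be applied from script. A minimal sketch (the selectors are placeholders):

```typescript
// Ask Chrome 114+ to balance line lengths in headlines. setProperty() is used
// because older DOM typings may not yet know about the text-wrap property.
document.querySelectorAll<HTMLElement>("h1, h2, .card-title").forEach((el) => {
  el.style.setProperty("text-wrap", "balance");
});
```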

Second, another new feature, CHIPS (Cookies Having Independent Partitioned State) enables opting-in to third-party cookies being partitioned by a top-level site using the new cookie attribute Partitioned, she explained. Previously, an embedded site could set a cookie on one site and use it from another, which created a cross-site tracking issue.

“While cross-site tracking is an issue, there are valid cross-site cookie needs which can be achieved in a privacy-preserving way with cookie partitioning,” Jara explained. “With CHIPS, when a user visits site A and embedded content from site C sets a cookie with the Partitioned attribute, the cookie is saved in a partitioned jar only for cookies that site C sets when it’s embedded on site A. The browser would only send that cookie when the top-level site is A.”

Then, when the user visits a different site that also embeds site C, site C does not receive the cookie it set while embedded in the first site.
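Opting in is just a matter of adding the Partitioned attribute when the embedded site sets its cookie. A minimal Node sketch playing the role of site C, with the cookie name and value as placeholders:

```typescript
import { createServer } from "node:http";

// Site C's server. When this response is loaded inside an iframe on site A,
// the Partitioned attribute tells the browser to store the cookie in a jar
// keyed to site A. Partitioned cookies must also be Secure, so in practice
// this would sit behind HTTPS.
createServer((req, res) => {
  res.setHeader(
    "Set-Cookie",
    "__Host-widget-session=placeholder-value; Secure; Path=/; SameSite=None; Partitioned"
  );
  res.end("embedded widget");
}).listen(8080);
```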

Finally, Jara explained how the Popover API makes it easier to build transient user interface elements that are displayed on top of all other web app UI. Examples include user-interactive elements such as action menus, form element suggestions, content pickers, and teaching UI.

“The new popover attribute enables any element to be displayed in the top layer automatically,” Jara explained. “This means no more worrying about positioning, stacking elements, focus or keyboard interactions for the developer.”
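A short sketch of the attribute and its companion DOM methods (element IDs are placeholders, and older TypeScript DOM typings may not yet include togglePopover):

```typescript
// Assumes markup like: <button id="open-menu">Menu</button> <div id="menu" popover>…</div>
// The popover attribute promotes the element to the top layer and gives it
// light-dismiss behavior for free; script can still drive it explicitly.
const menu = document.getElementById("menu") as HTMLElement;
const trigger = document.getElementById("open-menu") as HTMLButtonElement;

trigger.addEventListener("click", () => {
  menu.togglePopover(); // shows the popover if hidden, hides it if shown
});
```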

TypeScript 5.1 Released

Last week, The New Stack shared that TypeScript 5.1 RC was available. Well, it’s been released officially, and since then the TypeScript team has made a few adjustments.

“Since the RC, we’ve iterated slightly on our built-in refactorings to move declarations to existing files; however, we believe the implementation still needs some improvements,” wrote Daniel Rosenwasser, the senior program manager for TypeScript. “As a result, you may not be able to access it in most editors at the moment, and can only opt-in through using a nightly version of TypeScript.”

The plan is either to release a patch or incorporate the refactoring into TypeScript 5.2.

The post includes a summary of what’s new in TypeScript 5.1, including:

  • Easier implicit returns for undefined-returning functions (see the example after this list).
  • Unrelated types for getters and setters.
  • Decoupled type-checking between JSX elements and JSX tag types.
  • Namespaced JSX attributes.
  • typeRoots are consulted in module resolution.
  • Linked cursors for JSX Tags.
  • Snippet completions for @param JSDoc tags.
  • A slew of new optimizations.
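To make the first item on that list concrete: before 5.1, a function whose declared return type was undefined still had to contain an explicit return statement; now it can simply fall through.

```typescript
// Valid in TypeScript 5.1: a declared `undefined` return type with no
// explicit return statement. In earlier versions this was a type error.
function logAndForget(message: string): undefined {
  console.log(message);
  // no `return` needed
}

// Callers still see the declared type.
const result: undefined = logAndForget("hello");
```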

Dealing with Death: Social Networks and Modes of Access https://thenewstack.io/dealing-with-death-social-networks-and-modes-of-access/ Sat, 03 Jun 2023 13:00:17 +0000

One increasingly common problem faced by social networks is what to do about death. Getting access to an account of a deceased friend or relative usually has at least three parts, depending on the territory:

  1. Get a copy of the death certificate;
  2. Get letters testamentary (both tech companies and financial institutions will request that you not only prove that the person is dead but also that you have a legal right to access their accounts);
  3. Reach out to the platform.

This is all quite unreasonable, just to put a sticky note on the avatar page explaining why the deceased user is no longer responding. Waiting for a death certificate and other processes that move at a lawyer’s pace just adds to the misery. Social media companies are not (and don’t want to be) secondary recorders of deaths; indeed, we know that accounts regularly represent entities that were never alive in the first place.

What is really missing here, and what this article looks at, are different modes of access, as part of a fully functional platform. Designers need to create alternative and systematic access methods that help solve existing scenarios without having to hack their own systems.

The Case for Backdoors

The focus on security has produced unbalanced digital fortresses that now regard their own users’ accounts as potential risks. The term backdoor was intended to imply an alternative access route, but now simply means something to be boarded up tight at the next patch, before a security inquest. This has the unfortunate consequence of limiting the options for users.

In the early days of computing, when software was still distributed by floppy disks, people updated their applications a lot less, and alternative access to fix errors or make minor changes was quite normal. Magazines were full of cheats, hacks and hints. Some authorised, some not. Before the full suite of integrated testing became available, backdoors were often added by developers to test certain scenarios for an application. Today, we are no longer encouraged to think that we own running software at all, and that has changed how we think about accessing it.

In the example of a deceased user of a social media platform, the most straightforward solution is for a third-party legal company to hold a key in escrow. That company would then be charged with communicating with concerned humans. However, the ‘key’ would not allow a general login — it would purely be used to suspend an account, or to insert a generic account epitaph. So the third party concentrates on its role of soberly talking to friends, relatives or possibly other lawyers, while the platform can just maintain its services. (And yes, that could also mean police services could halt an account without having to negotiate with the social media company.) The agreement could be required to be set up after the account had crossed a size or time-alive threshold. From a development point of view, the special access would need to be detected, along with a confirmation that the account had indeed been suspiciously quiet.

Launching a Nuke

You may have seen the familiar dramatic film device where two people have to turn their keys to launch a nuclear missile, or open a safe. It is a trope even used by Fortnite.


The two-man rule is a real control mechanism designed to achieve a high level of security for critical operations. Access requires the presence of two or more authorised people. If we just step back a bit, it is just a multi-person access agreement. Could this be useful elsewhere?

Returning to examples on social media, I’ve seen a number of times when a friend has said something relatively innocent on Twitter, stepped on a plane, only to turn his network back on to discover a tweet that has become controversial. What if his friends could temporarily hide the tweet? Like the missile launch, it would need two or more trusted users to act together. Again, the point here is to envision alternative access methods that could be coded against. Given that the idea is to help the user while they are temporarily incapacitated, the user can immediately flip any action simply by logging back on.

The only extra required concept here is the definition of a set of trusted friendly accounts, any of whom the user may feel “has their back.” In real life this is pretty normal, even though we still envision social media accounts as existing in a different time and space. In fact, you might imagine that a user who can’t trust any other accounts probably isn’t suitable to be on social media.

Implementing this concept would require defining a time period after which a friendly intervention could be considered, and a way to check that the required quorum triggered the intervention at roughly the same time. One imagines that once you become a designated friend of another user account, the option to signal concern would appear somewhere in the settings of their app. This is certainly a more complex set of things to check than standard access, and it could well produce its own problems in time.
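As a sketch of what such a check might look like in code, here is one possible shape. Every name, threshold and time window below is invented for illustration; a real platform would obviously tune and harden all of it.

```typescript
// Hypothetical quorum check for a "friendly intervention": allow it only if
// enough designated trusted friends raise a concern within a short window,
// and the account owner has been quiet long enough to be presumed offline.
interface ConcernSignal {
  friendId: string; // must be on the owner's trusted-friends list
  raisedAt: Date;
}

function interventionAllowed(
  trustedFriends: Set<string>,
  signals: ConcernSignal[],
  ownerLastActive: Date,
  now: Date,
  quorum = 2,
  signalWindowMs = 60 * 60 * 1000,   // signals must land within one hour
  inactivityMs = 6 * 60 * 60 * 1000  // owner silent for at least six hours
): boolean {
  if (now.getTime() - ownerLastActive.getTime() < inactivityMs) return false;

  // Count each trusted friend once, using their earliest signal.
  const firstSignal = new Map<string, number>();
  for (const s of signals) {
    if (!trustedFriends.has(s.friendId)) continue;
    const t = s.raisedAt.getTime();
    const prev = firstSignal.get(s.friendId);
    if (prev === undefined || t < prev) firstSignal.set(s.friendId, t);
  }

  // Look for `quorum` signals inside any sliding window of signalWindowMs.
  const times = [...firstSignal.values()].sort((a, b) => a - b);
  for (let i = 0; i + quorum - 1 < times.length; i++) {
    if (times[i + quorum - 1] - times[i] <= signalWindowMs) return true;
  }
  return false;
}
```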

Both the third-party escrow key and the group of friendly accounts define a three-way trust system, which should be a familiar way to distribute responsibility. This is how a bank, a merchant and a buyer complete a purchase transaction. Testing these systems is similar in nature. First acknowledge the identity of the parties, then confirm that they have permission to perform the action, and finally confirm the action is appropriate at the time.

Negative Intervention

A natural variation on third-party intervention for an incapacitated user is where a third party wants to stop an account because they think it has been hacked or stolen. The obvious difference here is that the current user cannot be allowed to simply cancel the action. Social media companies may close a suspicious account down eventually, but there doesn’t seem to be a systematic way for users to trigger this independently.

This is a harder scenario to implement, as it needs a way for the authentic user to resolve the situation one way or another. Social media companies do, of course, keep alternative contact details for their users. Hence the user could signal that all is well; the account really has been taken; or the account was taken but has now been recovered. But until that happens, the account is in a slightly strange state — under suspicion, yet not officially so. Should the account be trusted? Perhaps the friends themselves are not themselves?

Get Back In

If you feel the examples above are odd, you shouldn’t. They are really just extensions of what happens when, in real life, you lock yourself out of your home and fetch a spare key from your neighbour — or ask the police not to arrest you when you smash your own window to get back in. While platforms need to regard their users with less suspicion and provide more access schemes, developers also need to experiment with innovative access styles. (Actual security breaches are often caused by disgruntled staff selling sensitive data.)

There is no question that AI could help make certain assessments — the things that have been mentioned throughout this article. Is an account acting suspiciously? Has it been quiet longer than usual? Has a two-man rule been activated? Orchestration of edge case scenarios is something that AI might well be successful with, as well.

Maybe with the help of GPT and more experimentation, users may find that recovery from uncommon but unfortunate scenarios will be less fraught in the future.

LangChain: The Trendiest Web Framework of 2023, Thanks to AI https://thenewstack.io/langchain-the-trendiest-web-framework-of-2023-thanks-to-ai/ Thu, 01 Jun 2023 17:57:54 +0000

LangChain is a programming framework for using large language models (LLMs) in applications. Like everything in generative AI, things have moved incredibly fast for the project. It started out as a Python tool in October 2022, then in February added TypeScript support. By April, it supported multiple JavaScript environments, including Node.js, browsers, Cloudflare Workers, Vercel/Next.js, Deno, and Supabase Edge Functions.

So what do JavaScript developers (in particular) need to know about LangChain — and indeed about working with LLMs in general? In this post, we aim to answer that question by analyzing two recent presentations by LangChain creator Harrison Chase.

LangChain began as an open source project, but once the GitHub stars began piling up it was promptly spun into a startup. It’s been a meteoric rise for Harrison Chase, who was studying at Harvard University as recently as 2017, but is now CEO of one of the hottest startups in Silicon Valley. Earlier this month, Microsoft Chief Technology Officer Kevin Scott gave Chase a personal shout-out during his Build keynote.

Chat Apps All the Rage

Unsurprisingly, the main use case for LangChain currently is to build chat-based applications on top of LLMs (especially ChatGPT). As Tyler McGinnis from the popular bytes.dev newsletter wryly remarked about LangChain, “one can never have enough chat interfaces.”

In an interview with Charles Frye earlier this year, Chase said that the best use case right now is “chat over your documents.” LangChain offers other functionality to enhance the chat experience for apps, such as streaming — which in an LLM context means returning the output of the LLM token by token, instead of all at once.
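LangChain wraps the streaming plumbing for you, but the underlying mechanics are simple enough to show directly. A bare-bones sketch against OpenAI’s REST endpoint (full server-sent-event parsing and error handling omitted; chunks split across network reads are ignored for brevity):

```typescript
// Request a streamed completion and print tokens as they arrive (Node 18+).
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo",
    stream: true, // return server-sent events instead of one JSON body
    messages: [{ role: "user", content: "Summarize this document for me." }],
  }),
});

const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Each chunk contains lines like `data: {...}`; the payload's delta holds the token.
  for (const line of decoder.decode(value).split("\n")) {
    if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
    const token = JSON.parse(line.slice(6)).choices[0]?.delta?.content;
    if (token) process.stdout.write(token);
  }
}
```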

However, Chase indicated that other interfaces will quickly evolve.

“Long term, there’s probably better UX’s than chat,” he said. “But I think at the moment that’s the immediate thing that you can stand up super-easily, without a lot of extra work. In six months, do I expect chat to be the best UX? Probably not. But I think right now, what’s the thing that you can build at the moment to deliver value, it’s probably that [i.e. chat].”

Given that developing applications with LLMs is such a new thing, startups like LangChain have been scrambling to come up with tools to help navigate some of the issues with LLMs. With prompt engineering, for example, Chase indicated that it still mostly comes down to the developer’s intuition on which prompts work better. But LangChain has introduced features like “tracing” this year to help with that.

Agents

One of LangChain’s more recent features is “custom agents,” which Chase talked about at the Full Stack LLM Bootcamp, held in April in San Francisco. He defined agents as a method of “using the language model as a reasoning engine,” to determine how to interact with the outside world based on user input.

Harrison Chase at the LLM Bootcamp.

He gave an example of interacting with a SQL database, explaining that typically you have a natural language query and a language model will convert that to a SQL query. You can execute that query and pass the result back to the language model, ask it to synthesize it with respect to the original question, and you end up with what Chase called “this natural language wrapper around a SQL database.”

Where agents come in is handling what Chase termed “the edge cases,” which could be (for instance) an LLM hallucinating part of its output at any time during the above example.

“You use the LLM that’s the agent to choose a tool to use, and also the input to that tool,” he explained. “You then […] take that action, you get back an observation, and then you feed that back into the language model. And you kind of continue doing this until a stopping condition is met.”
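Stripped of the prompt engineering, that loop is easy to sketch. The shapes below are invented for illustration and are not LangChain’s actual API; the interesting part is simply the choose-act-observe cycle with a stopping condition.

```typescript
// Schematic agent loop: the LLM picks a tool and its input, we run the tool,
// feed the observation back, and repeat until the LLM reports it is finished.
interface Tool {
  name: string;
  run(input: string): Promise<string>;
}

interface AgentStep {
  finished: boolean;
  tool?: string;
  toolInput?: string;
  answer?: string;
}

// `chooseAction` would prompt the LLM with the question plus the transcript of
// previous action/observation pairs and parse its reply; it is passed in here.
type ChooseAction = (question: string, transcript: string[]) => Promise<AgentStep>;

async function runAgent(
  question: string,
  tools: Tool[],
  chooseAction: ChooseAction,
  maxSteps = 5
): Promise<string> {
  const transcript: string[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const step = await chooseAction(question, transcript);
    if (step.finished) return step.answer ?? "";
    const tool = tools.find((t) => t.name === step.tool);
    const observation = tool ? await tool.run(step.toolInput ?? "") : "unknown tool";
    transcript.push(`Action: ${step.tool}(${step.toolInput}) -> ${observation}`);
  }
  return "Stopped: step limit reached."; // the stopping condition Chase describes
}
```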

Implementing agents.

One popular approach to agents is called “ReAct.” This has nothing to do with the popular JavaScript framework of the same name; this version of “ReAct” stands for Reason + Act. Chase said this process yields “higher quality, more reliable results” than other forms of prompt engineering.

ReAct (not React)

Chase admitted that “there are a lot of challenges” with agents, and that “most agents are not amazingly production ready at the moment.”

The Memory Problem

Some of the issues he listed seem like basic computer concepts, but they are more challenging in the context of LLMs. For instance, LLMs usually don’t have long-term memory. As noted in a Pinecone tutorial, “by default, LLMs are stateless — meaning each incoming query is processed independently of other interactions.”

This is one area where LangChain aims to help developers, by adding components like memory into the process of dealing with LLMs. Indeed, in JavaScript and TypeScript, LangChain has two methods related to memory: loadMemoryVariables and saveContext. According to the documentation, the first method “is used to retrieve data from memory (optionally using the current input values), and the second method is used to store data in memory.”
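A small example of those two methods, based on how the LangChain JS documentation described them at the time (the import path has moved around between releases, so treat the details as indicative):

```typescript
import { BufferMemory } from "langchain/memory";

const memory = new BufferMemory();

// Store one exchange: the user's input and the model's output.
await memory.saveContext(
  { input: "Hi, my name is Ada." },
  { output: "Nice to meet you, Ada!" }
);

// Later, retrieve what the chain should "remember" for the next call.
const vars = await memory.loadMemoryVariables({});
console.log(vars);
// Roughly: { history: "Human: Hi, my name is Ada.\nAI: Nice to meet you, Ada!" }
```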

Another form of agent that Chase talked about is Auto-GPT, a software program that allows you to configure and deploy autonomous AI agents.

“One of the things that Auto-GPT introduced is this idea of long-term memory between the agent and tools interactions — and using a retriever vector store for that,” he said, referring to vector databases.

The New LAMP Stack?

Clearly, there’s a lot of figuring out yet to do when it comes to building applications with LLMs. In its Build keynotes, Microsoft classified LangChain as part of the “orchestration” layer in its “Copilot technology stack” for developers. In Microsoft’s system, orchestration includes prompt engineering and what it calls “metaprompts.”

Microsoft has its own tool, Semantic Kernel, that does a similar thing to LangChain. It also announced a new tool called Prompt Flow, which Microsoft CTO Kevin Scott said was “another orchestration mechanism that actually unifies LangChain and Semantic Kernel.”

It’s also worth noting the word “chain” in LangChain’s name, which indicates that it can interoperate with other tools — not just various LLMs, but other dev frameworks too. In May, Cloudflare announced LangChain support for its Workers framework.

There’s even been a new acronym coined involving LangChain: OPL, which stands for OpenAI, Pinecone, and LangChain. The inspiration for that was likely the LAMP stack (Linux, Apache, MySQL, PHP/Perl/Python), which was a key part of 1990s web development and led to the emergence of Web 2.0. Who knows if OPL will stick as a term — and of course, its components aren’t all open source — but regardless, it’s a good indication that LangChain is already an important part of many developers’ personal stacks.

30 Non-Trivial Ways for Developers to Use GPT-4 https://thenewstack.io/30-non-trivial-ways-for-developer-to-use-gpt4/ Thu, 01 Jun 2023 16:39:09 +0000

I confess: My use of ChatGPT has been limited. I don’t like the fact it hallucinates facts. If I have to check everything it says, what is the point? That said, I have used it to design a kitchen garden, which I admit is a pretty trivial use.

Developers, however, have found many non-trivial uses for GPT-4, OpenAI’s latest large language model. In a recent Hacker News thread, developers shared how they’re using the LLM. We’ve compiled the best suggestions here, along with additional suggestions from ChatGPT itself about how developers should use it.

1. SQL Queries

One poster reports being bad at writing SQL queries with bunches of joins. So, the coder just showed the bot the table definitions and told it what was desired. As is often true with AI, it may take a few iterations to get it right, the poster warned.
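In practice that workflow is just a prompt that includes the schema. A rough sketch using OpenAI’s chat completions REST endpoint, with the model name, tables and question as placeholders:

```typescript
// Ask GPT-4 to write a multi-join query from the table definitions.
const schema = `
CREATE TABLE customers   (id INT, name TEXT, region TEXT);
CREATE TABLE orders      (id INT, customer_id INT, created_at TIMESTAMP);
CREATE TABLE order_items (order_id INT, product_id INT, quantity INT);
`;

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4",
    messages: [
      { role: "system", content: "You write correct SQL. Reply with SQL only." },
      {
        role: "user",
        content: `${schema}\nTotal quantity ordered per customer region in the last 30 days.`,
      },
    ],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content); // review the query before running it
```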

2. Writing RegExps

The same commenter also uses ChatGPT-4 to write regular expressions. “It is excellent at them,” the programmer reported.

3. Prototypes

Another coder uses it to brainstorm and prototype approaches to a problem. In particular, he used it mostly for machine learning pipelines, small React sites and Python command line interfaces (CLIs).

“First, I’ll ask it to give me an overview of the problem domain; this gives the LLM context,” he stated. “Then, I describe the problem and ask it to generate solutions, along with pros/cons of each approach. This is iterative: you might ask it questions, modify its suggestions, and periodically summarize. After that, you can either ask it to give you code for a prototype or build it yourself.”

4. Rubberducking

Rubberducking is the practice of debugging code by explaining the problem in spoken or written natural language. It comes from a story in The Pragmatic Programmer in which a programmer carries around a rubber duck and debugs their code by forcing themselves to explain it, line by line, to the duck. ChatGPT-4 can be used to explain the code, line by line, according to one programmer.

5. Personal Tutoring

While ChatGPT-4 “is not there yet” when it comes to pure development or peer review, said one user, it is good at clarifying follow-up questions as a virtual tutor.

6. Coding a Database

Along with 7. finding a bug in the metrics, 8. speeding up the test cycle, 9. reducing pressure on the garbage collector and tightening some timer handling, and 10. finding unnecessary type assertions.

Developer Philip O’Toole used ChatGPT-4 for #6-10 and more. It saved him so much time and he enjoyed it so much, he wrote a blog post about how it helped him code the database. At one point, he wasn’t sure how ChatGPT-4’s suggested changes would help reduce pressure on the garbage collector so he asked it to explain, which it could. He made the changes.

11. Writing Micro-Benchmarks for C++

“They always compile, but require some editing,” noted Simon Boehm. It also requires good prompting, Boehm added.

12. Explaining Assembly Code

Another way Boehm used ChatGPT-4 was to just “dump plain objdump -S output into it.”

13. Optimizing Code to Reduce Memory Usage

“I had to optimize some Python code to reduce its memory usage,” wrote Vitor Baptista. “After trying all ideas I could think of, I thought about rewriting it in a different language. Copied and pasted the code into ChatGPT 4. Tried Rust at first, but there were too many compilation errors. Then I tried Go and it worked perfectly.” The developer had never used Go and used ChatGPT-4 to improve the Go code. “It gave me great answers, I think maybe once or twice the code didn’t compile (I used it dozens of times per day),” Baptista wrote. “I’m now using the optimized Go code in production.”

14. Writing in JavaScript (or Any Language That Isn’t Your Strength)

Another programmer uses it to generate code in languages that are not that well known to the programmer, which in this case was JavaScript. ”My JavaScript isn’t the strongest, so I’d prolly have to spend 30-45 minutes just coming up to speed again on my basic AJAX and modern syntax, or BAM write a schema of my idea and get GPT to get my idea on paper with halfway decent style, syntax,” the programmer stated. “I can take it from there.”

15. Looking up How to Do Something

“GPT-4 is bad at doing things and great at looking things up for you,” read one submission. “Rather than trying to get it to do things, I ask it how I should do it.”

16. Frontend Writing

One developer used GPT-4 to write a simple React app to test out an endpoint. Once the code was working, the developer put the app back into the AI and asked it to make it “more visually appealing” — and it did.

17. Testing and Documentation

One of the big predictions with AI is that it will be customized for testing. One person is already using it to do unit testing and some documentation. “ I find that the code it spits out isn’t perfect but getting some boilerplate and fixing it up is pretty fast compared to writing from scratch,” the person stated. “I’ve used this enough that I wrapped some cli glue around it…” Mostly the commentator has used it to write Python and Bash, although it’s also been used “with some Makefiles and Dockerfiles thrown in.”

20. OpenSearch (or ElasticSearch) Query Building

“I was new to the technology and their syntax took a while to wrap my head around. Instead I’d just tell ChatGPT my document format and then ask for specific data in natural language,” wrote one poster. “Fair warning, the queries were not always perfect on first try, but it was a lot easier than parsing replies to somewhat similar questions on Stack Overflow. Now I mostly write my own queries but it really helped me get started.”

21. Getting the Ball Rolling in General

The code may not be perfect, but a number of developers said they use GPT-4 to “get the ball rolling” on coding problems. One used it to get started with TypeScript and React. Essentially, the AI acts as a scaffold for learning how to do something complex. One tip for that: Iteratively ask the bot to summarize everything that’s agreed upon before asking it to write the code, suggested another programmer.

22. Image Processing

One programmer reported using GPT-4 to do image processing in OpenCV. “It’s saved a lot of time I would’ve spent figuring out the required transforms and matrix operations,” the developer wrote.

23. Shell Scripting

It’s great for any type of shell scripting and works well for fleshing out type definitions, according to one hacker news reader. Another developer reported using ChatGPT-4 to produce a working web socket server in Rust when the developer had no experience with the networking crate for async runtimes.

Inspired by these ideas, The New Stack decided to ask ChatGPT-4 itself how developers can best use it for building applications. Here’s what it suggested:

24. Natural Language Processing

“Integrate me into your application to provide natural language understanding and generation capabilities,” the AI told The New Stack. “I can assist with tasks like language translation, sentiment analysis, text completion and summarization.”

25. Developing a Chatbot

ChatGPT-4 can be used as the backend for building chatbots. It can help developers create interactive user experiences by generating responses based on user queries or instructions.

26. Generating Content

“If your application requires generating content such as articles, product descriptions, or social media posts, you can utilize my language generation abilities to create coherent and contextually appropriate text,” the bot suggested, and indeed one developer on Hacker News did report using the AI to develop wiki article pages for an encyclopedia in development.

27. Researching and Knowledge Assistance

This one seems most self-evident but is non-trivial. It can also be used to summarize dense passages developers may find in their research.

28. Integrating a Virtual Assistant

“Incorporate me into virtual assistant applications to provide intelligent and context-aware responses,” the AI suggested. “I can help users with tasks like scheduling, reminders, recommendations, and general information retrieval.”

29. Following Best Practices for Code Generation and Debugging

This may seem repetitive, but it’s worth pointing out that the chatbot can also provide insights into programming concepts and best practices.

30. Simulating End User Interactions

“Employ me to simulate user interactions and test the functionality of your application,” the AI suggested. “You can generate sample inputs, evaluate outputs, and perform scenario-based testing.”

Bluesky vs. Nostr — Which Should Developers Care About More? https://thenewstack.io/bluesky-vs-nostr-which-should-developers-care-about-more/ Tue, 30 May 2023 14:51:11 +0000

We’re in a new golden age of decentralized versions of Twitter. Mastodon (an open source project built on the ActivityPub protocol), Bluesky (a company building the AT Protocol) and now Nostr (an open protocol project) are all attempting to fulfill the promise of Twitter in 2007, when it was erroneously called an open platform.

Intriguingly, each of these three projects is coming at the problem of social media from a slightly different angle. Add to the list Scuttlebutt, an open protocol that I profiled in 2021, and there are now multiple open protocols challenging the proprietary software of Elon Musk’s Twitter.

Because it’s probably the least familiar, let’s start with Nostr. In a recent interview, long-time social media developer Rabble (a.k.a. Evan Henshaw-Plath) explained that he has moved his focus from Scuttlebutt to the Nostr protocol. Rabble’s product was formerly a decentralized social network product called Planetary, which had been built on Scuttlebutt. But in March, Rabble announced a pivot to Nostr, along with a new product called Nos — a Nostr client app based on Planetary.

Rabble also commented on the differences between Nostr and Bluesky (which is basically a Twitter clone at this point). One of the appeals of Nostr, he said, is the flexibility to be able to create his own app. “Bluesky is real, but I don’t know if we’re gonna be able to make Bluesky flexible in the way that Nostr has all of these apps,” he said. “So, Scuttlebutt had all these crazy apps, and Nostr has all these crazy apps.”

“The servers in Bluesky are stronger and more opinionated in how they do it, to just do the needs of cloning Twitter,” he continued. “And so that’s going to make the building of all these other crazy apps, which is part of the fun of Nostr, really hard — but it might make the just straight Twitter social-like app easier.”

Nos, Rabble’s beta Nostr app.

Part of Rabble’s reluctance to embrace Bluesky (although he is by no means against the project) is that it isn’t yet as open as Nostr. The Bluesky protocol, AT Protocol, has been developed largely in-house so far, so external developers like himself have had limited say in its future direction. Also, the way Bluesky has been implemented so far has — perhaps by necessity — been less decentralized than Rabble would like. In particular, he points out that sign-in on Bluesky is not decentralized.

“It’s as if everybody in the network were using the same key, and then we just attach different identities to it,” he said, “and so that’s not a decentralized network.” He added that the company, Bluesky, promises in its terms of service that the key is “yours and you can move it to another server,” but he wants to see it before he believes it.

Bluesky for Devs

Bluesky, the product, is currently in private beta — just today, it broke the 100,000 user mark. However, more than half of that total have yet to post more than once (see image below), so the active community is more like 40-50,000.

Bluesky stats, 30 May 2023; via Jaz.

I am one of the privileged people currently on the service and, by current standards, I am pretty active there (54 posts at the time of writing). So far, Bluesky has reminded me a lot of early Twitter. The nascent community tries to have a fun vibe — “shitposting” is encouraged on Bluesky — which is a deliberate contrast to the more earnest Mastodon community. The user experience is also a lot more polished on Bluesky than on Mastodon, so the early indications are that it has a better chance of ultimately challenging Twitter, once Bluesky is opened to the public.

Not everyone is enamored of Bluesky, though. Jack Dorsey, who initiated the Bluesky project in late-2019 as a project within Twitter (it was later spun out as an independent company) has been critical of Bluesky this year. “Unfortunately they went a bit too hard on focusing on a Twitter product and not developer community,” he wrote on Nostr in April. Nostr has seemingly become Dorsey’s favored social media account.

The same day Dorsey made that comment, Bluesky (the company) published a blog post about the AT Protocol developer ecosystem. Bluesky CEO Jay Graber claimed that there are already “many projects building on the ‘firehose’ of Bluesky app data” and that its development philosophy is “to build on existing software stacks that make interoperability simple.” She pointed out that “the at proto ecosystem” uses IPFS dev tooling, the DID specification for the identity layer, and an API that is “well specified and is simply JSON over HTTP.”

A month later and there is now a healthy list of AT Protocol projects, which at first glance bears some similarity to the third-party projects built on top of the late-2000s Twitter API.

Bluesky in May 2023, while still in private beta.

This does sound promising, but as Rabble pointed out, the AT Protocol developer ecosystem is fairly tightly controlled by Bluesky at the present time. Although it is all open source (unlike Twitter, even in its early days), we don’t yet know what kind of centralized pressure Bluesky (the company) might exert in future.

Indeed, one of the criticisms of ActivityPub is that Mastodon — by far the biggest project running on the protocol — might have an undue influence in the further development of ActivityPub. So there is a danger that a similar risk presents itself in the AT Protocol, with Bluesky dominating proceedings.

Why Should Devs Care About Nostr?

A key benefit that developers like Rabble see in Nostr is that there is little apparent risk of power coalescing in a centralized project (like Mastodon) or company (like Bluesky).

According to its GitHub page, the protocol “doesn’t rely on any trusted central server, hence it is resilient; it is based on cryptographic keys and signatures, so it is tamperproof; it does not rely on P2P techniques, and therefore it works.”

The name of the project is an acronym for “notes and other stuff transmitted by relays.” Relays are servers, but they aren’t massive hubs like on Mastodon (at least when we’re talking about the main servers, like mastodon.social) or Bluesky. “To publish something, you write a post, sign it with your key and send it to multiple relays (servers hosted by someone else, or yourself),” states the Nostr documentation.
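The wire format is deliberately simple. Here is a sketch of publishing a note to a single relay following the NIP-01 event shape; in a real client the id and sig fields are computed from your key with a helper library rather than filled in by hand, and the relay URL is just an example.

```typescript
// A Nostr "text note" (kind 1) as defined by NIP-01. The id is a SHA-256 hash
// of the serialized event and sig is a Schnorr signature over it; both are
// left as placeholders here.
const event = {
  id: "<sha256 of the serialized event>",
  pubkey: "<your public key, hex>",
  created_at: Math.floor(Date.now() / 1000),
  kind: 1,
  tags: [],
  content: "Hello from a minimal Nostr client",
  sig: "<schnorr signature, hex>",
};

// Publishing is a single JSON message over a WebSocket to each relay you choose.
const relay = new WebSocket("wss://relay.damus.io"); // example relay
relay.onopen = () => relay.send(JSON.stringify(["EVENT", event]));
relay.onmessage = (msg) => console.log("relay replied:", msg.data);
```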

Nostr relays.

Rabble described the Nostr architecture as “small pieces loosely joined.” When he talks about why he chose to pivot to Nostr for his decentralized social media product, Rabble sounds distinctly nostalgic.

“So when I started working on decentralized social, I wanted us to go back to the world of the Facebook app platform and the Twitter API, where all sorts of developers were coming up with all sorts of crazy ideas and you didn’t need permission to do them.”

I assume he’s referring to the early years of Facebook and Twitter, but in any case, he sees a similar permission-less environment currently in Nostr. “I want that easy space by which we can have a lot of different apps,” he said. “And at the moment that exists on Nostr, but doesn’t on Bluesky — […] until the point at which they [Bluesky] don’t have a choke point to turn it off, then you can’t trust it.”

I’ve played with Rabble’s new app, Nos, and it was easier to use than another Nostr client I’d tried. That said, the Nostr network is still difficult to get your head around and so far I’ve yet to make any social connections on there. The content I’ve seen on the network has a strong libertarian bent, indicative of its roots in the Bitcoin community (both the creator, known as fiatjaf, and Dorsey are known Bitcoin proponents).

Conclusion

It’s too early to say which of AT Protocol or Nostr (or ActivityPub) is better for developers when it comes to evaluating decentralized social media protocols. Bluesky certainly is the most likely to challenge Twitter, but will it come at the expense of less control for developers? Nostr looks the most flexible of the protocols, and the ‘safest’ in terms of developer control, but it’s also the least likely to reach a mainstream user base.

Ultimately, it depends on what your goals are as a developer. If you’re aiming to reach a wide user base, Bluesky or Mastodon are your best bets. But maybe, like Rabble, you’d rather have complete control over your app’s destiny.

The post Bluesky vs. Nostr — Which Should Developers Care About More? appeared first on The New Stack.

]]>
Bad by Design: The World of Intentionally Awful User Interfaces https://thenewstack.io/bad-by-design-the-world-of-intentionally-awful-user-interfaces/ Sun, 28 May 2023 13:00:21 +0000 https://thenewstack.io/?p=22708690

They’re funny — and strangely relatable. But also thought-provoking, if not downright educational. And, lately, these intentionally bad interfaces have

The post Bad by Design: The World of Intentionally Awful User Interfaces appeared first on The New Stack.

]]>

They’re funny — and strangely relatable. But also thought-provoking, if not downright educational.

And, lately, these intentionally bad interfaces have been provoking a new round of laughter on Twitter.

Because it’s 2023, one of the most insightful reactions was apparently even generated by an AI bot.

“Well, at least they’re finally acknowledging that engineers are responsible for the worst user interfaces on the internet.

“Maybe next they can have a competition for who can create the most confusing automated customer service system…”

But it turns out the comical controls are all part of a long-standing meme — a kind of internet running joke with a dire yet light-hearted warning for our times.

“I build user interfaces for a living, and my primary source of inspiration during a 20-year-long career has been bad interfaces,” said Swedish frontend developer and interface designer Hakim El Hattab. It’s all a reminder that frustrating consumer experiences can reach cosmically comic proportions. That we’re all at the mercy of engineers building our user interfaces.

And that our programming tools are now powerful enough to let bored internet satirists dream up alternate interfaces whenever they feel like it…

Challenge Accepted

All the bad design craziness apparently started back in 2017 in Reddit’s “Programmer Humor” subreddit, when a user posted a tall green volume bar that needed to be dragged… horizontally. “Who can make the best volume slider?” the post had asked.

And the message was heard, the challenge accepted…

“It was wild,” remembered a recent comment on Reddit. “For weeks the subreddit was just terrible volume controllers.” After 11,000 upvotes and 403 comments, the original discussion thread was permanently archived (and closed for new comments).

But then dozens of new discussion threads were started for each new, bad volume-control interface.

GPS Volume Control
by u/mrzacharyjensen in ProgrammerHumor

Over the years the best ones have been featured in writeups by sites like Bored Panda and the Verge. Back in 2017, designer Fabricio Teixeira even collected them together for a post on his design blog UX Collective, calling their attempts “a fun exercise/joke, that can strengthen one’s creative muscles and ability to think outside of the box.”

Yet these curated collections only scratched the surface of the original event, missing the scope of just how many more different volume controls were created. Even the ones circulating now on social media had each offered their own tantalizing headlines.

There were many more — and all of them were oh-so-creatively bad. “It’s like a daily Far Side comic, with a volume theme…” one Redditor had posted at the time — adding “I kinda hope it goes on forever.” But along the way, maybe this spontaneous outpouring also offered some grassroots meta-commentary about our own over-engineered world…

Implicit Critiques

One poster imagined a volume control interface that subjected its input to the usual checks for password strength. (“Your volume must have at least six characters… Your volume must contain at least one uppercase letter… “)

Another imagined a volume control whose upper bound was…the amount of charge left in your battery. (“Please charge device to increase volume…”)

Running low on volume
by u/fluiux in ProgrammerHumor

And then there was the seemingly innocuous volume controller that demanded of its users, “To change volume to 35%, please solve CAPTCHA below…”

The blog post on Teixeira’s site argues that “there’s definitely a reflection point about the state of our industry here.”

In short, Teixeira believes designers today often feel the urge to innovate (not to mention professional pressures) — and that’s met with the easy availability of tools. But that doesn’t mean that innovation needs to happen, Teixeira writes.

“Let’s be honest: the volume control design pattern has been around for decades, works pretty well for the majority of users, and is incredibly familiar to a lot of people. You don’t need to reinvent it.”

“People expect interfaces to look and behave in a predictable way,” acknowledged UI designer Hakim El Hattab — before emailing me two of their alternate versions of a checkbox. “I think it’s a lot of fun to break the rules and try to surprise people.”

But even as the original meme offered its implicit critique of the design industry, there were also some moments that were oddly educational for programmers. One maker actually built their volume control in the real world — using an Arduino that adjusts the volume based on the amount of heat detected by a temperature sensor.

And one stripped-down user interface even needed the desired volume to be spelled out in Morse code.

Single button volume interface
by u/LinAGKar in ProgrammerHumor

That same thoughtful spirit continues to this day — even as the phenomenon has become almost a Reddit institution. There’s now a subreddit on the site dedicated just to “Intentionally bad User Interfaces,” which was “spawned” from the “ProgrammerHumor” subreddit and its “constant need to find bad UI then make more of it” (according to the subreddit’s official description).

Named “badUIbattles,” this subreddit now has 203,000 members committed to creating “bad UIs just for the sake of them being bad.”

And yet to this day, there’s also a tradition of encouraging posters to share their source code. “I think it’s important to share the source wherever possible so that others can learn,” Hakim El Hattab told me. “I learned that way myself and it’s nice to pay it forward.”

Spreading to the Web

In the new subreddit, El Hattab recently shared their own perfectly maddening interface that perhaps sends a second message: That the people who make “unsubscribe” buttons secretly don’t want you to.

Working on my new unsubscribe page
by u/hakimel in badUIbattles

A hint may be hidden in its URL: clickmeifyoucan.

But part of the subreddit’s charm is reading the comments, where people post their honest reactions, marvel at the UI’s ingenuity and share a laugh.

“Thanks, I hate it”

“In germany, stuff like this would be illegal”

And then there was that one user who wrote sardonically…

“I would like to subscribe to your newsletter…”

So the tradition continues as the years roll along, and the new subreddit even inspired Brazil-based physics engineer (and self-taught programmer) André Goulart Nogueira to create a web repository for all the “best (or worst?) bad-UI I’ve seen.”

And Nogueira’s own creations also appear on that page, including an interface for submitting your phone number which uses nothing but a sliding selector that scrolls through… every possible phone number. (Selecting “Advanced Mode” even activates a second slider which moves the first slider — but only if you tilt it just right…)

Meanwhile, Nogueira’s “birthday selector” also seems deceptively easy — until you realize it wants you to scroll through every day of your life until you reach the exact date of your birth. (Although two additional options let you start with the year 1 A.D. — or with the year 9999…)

And over in the badUIbattles subreddit, another user shared their even more diabolical creation: the notorious “tabbed keyboard”. Implemented with some simple JavaScript (in an HTML/CSS webpage), its distinguishing feature is really its lack of a keyboard. (You can try it yourself in a web browser.) The “Enter User Name” window just contains a single key — marked with a “plus” symbol — that when clicked will create a second window with exactly one more key, which you can then use only for typing its one randomly assigned letter of the alphabet.

Tab back to that first window — the plus sign — and you can click the “plus” key again to create another window with a key for typing one more letter… Then continue until you’ve randomly received enough letters to type out your entire user name. (Plus, the additional “submit” key you’ll actually need for entering that name…)

The punchline? The user interface is programmed to then tell you “Username already taken. Please pick another.”

The creator joked on Reddit that it’s good for people with poor eyesight — since to view each single key, they can use the entire screen.

And again, comments of appreciation flooded in.

“God it’s awful… good job.”

“This is even worse on mobile and I love it.”

One commenter even sarcastically applauded the interface for “ensuring the screen isn’t cluttered by any functions you don’t need at that exact moment. Truly the future.”

“When will user experience jokes not be funny? Probably never,” quipped the Verge, “as services/products continue to be in a constant race with themselves to make things ‘better’ while often neglecting how we interact with them…” And sure enough, back in Reddit’s original “ProgrammerHumor” subreddit, yet another intentionally-bad volume control interface appeared earlier this year.

Though this one appears to be more of a joke about Vim.

I’ve suddenly remembered the old challenge to make the worst volume slider, so here’s my entry. Unexitable
by u/sicve011 in ProgrammerHumor

But maybe satire is the sincerest form of criticism, pointing a way forward to a better world. “It’s incredibly frustrating when a simple task is made difficult by a poorly designed interface,” Hakim El Hattab told me.

“This frustration with bad interfaces has taught me what to avoid and enabled me to create more user-friendly interfaces at work.”

The post Bad by Design: The World of Intentionally Awful User Interfaces appeared first on The New Stack.

]]>
Dev News: New Microsoft Edge Tools and Goodbye Node.js 16 https://thenewstack.io/dev-news-new-microsoft-edge-tools-and-goodbye-node-js-16/ Sat, 27 May 2023 13:00:34 +0000 https://thenewstack.io/?p=22709332

Microsoft’s web development team announced a number of changes that should make Edge faster while improving the developer experience, according

The post Dev News: New Microsoft Edge Tools and Goodbye Node.js 16 appeared first on The New Stack.

]]>

Microsoft’s web development team announced a number of changes that should make Edge faster while improving the developer experience, according to a presentation at the Microsoft Build conference on Wednesday.

The updates include:

  • A built-in JSON Viewer is currently an experimental feature in Edge 114. Activating this feature will cause any URL that returns a JSON resource to load directly into the browser in the JSON viewer.
  • Microsoft Edge Dev Tools, which are built into the browser. “It’s a set of tools that appear next to the rendered webpage in the browser, and provide a powerful way to inspect and debug web pages and web apps,” Zohar Ghadyali, program manager on the Microsoft Edge Dev Tools team, said. “As a non-exhaustive list, you can use the dev tools to inspect, tweak and change the styles of elements in the web page using live tools with a visual interface, inspect network traffic and see the location of problems like resources that fail to load and debug your JavaScript using breakpoint debugging and with a live console.” In all, there are 33 tools.
  • Focus Mode. To help manage the visual overhead of the 33 tools, the browser also offers Focus Mode, wherein the top bar is customizable to hold a developer’s primary tools and the bottom bar contains the rest. “In addition to moving the dock location of dev tools, you can also customize the positioning of the activity bar,” Ghadyali said. “If you like the older Dev Tools UI and you like the horizontal toolbar, you can leave the activity bar in this orientation. However, if you like the way VS Code is organized, instead, you can move the activity bar to a vertical orientation.” Focus Mode also incorporates a quick view function that allows the developer to look at two tools simultaneously.

There are also new features in dev tools to improve performance and add context to things such as enhanced traces, including the .devtools file, a new file format for enhanced traces from Microsoft Edge.

“The key benefits of enhanced traces are that even if server-side changes are made, you’re preserving the state of your source code and the state of your webpage,” Ghadyali said. “This means you can package and share self-contained Dev Tools instances when collaborating with your teammates or co-workers.”

This solves the “it doesn’t work for me” problem. Instead of getting a rogue console error and then trying to show a colleague how you triggered that state, you can instead export an enhanced trace and share the .devtools file with them, and the state will be preserved, Ghadyali explained.

Another new feature is the “select your stats” feature in the performance tools, which is designed to help developers understand what is happening during long-running recalculate style events in the performance tool.

There’s also support for faster debugging using source maps, which map from the transformed source to the original source, allowing the browser to reconstruct the original source and present the reconstructed original in a debugger.
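As a general illustration of how source maps work (not specific to Edge’s implementation), the transformed bundle points to its map with a trailing comment, and the map itself is a small JSON document that records where each generated position came from. The file names below are made up:

```javascript
// dist/app.min.js — the transformed (minified) source the browser actually runs.
// The trailing comment tells the debugger where to find the source map:
console.log("hello");
//# sourceMappingURL=app.min.js.map

// dist/app.min.js.map — a JSON document (shown here as a comment) that maps
// positions in the minified file back to the original files:
// {
//   "version": 3,
//   "file": "app.min.js",
//   "sources": ["../src/app.ts"],
//   "names": [],
//   "mappings": "..."   // VLQ-encoded position data, abbreviated here
// }
```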

Edge also has a new experimental crash analyzer tool.

“With the crash analyzer tool, you can input a JavaScript stack trace, like those that you get for non-fatal JavaScript exceptions, and have your source maps applied to the stack trace so that you can debug faster,” said Rob Paveza, a principal software engineering manager at Microsoft.

Paveza also shared with the audience anti-patterns that can slow web performance.

Vercel to Deprecate Node.js 16

Bad news for those who don’t want to upgrade: Vercel is deprecating Node.js 16, beginning Aug. 15, 2023. Node.js 16 will reach official end of life on Sept. 11, 2023. Node.js 14 reached official end of life on April 30, 2023.

“On Aug. 15, 2023, Node.js 14 and 16 will be disabled in the Project Settings and existing Projects that have Node.js 14 and 16 selected will render an error whenever a new Deployment is created,” Vercel noted in its announcement. “The same error will show if the Node.js version was configured in the source code.”

It added that while existing deployments with Serverless Functions will not be affected, developers should upgrade to Node.js 18 in order to receive security updates.

Microsoft’s TypeScript 5.1 RC Available

Microsoft’s Daniel Rosenwasser, senior program manager of TypeScript, published a detailed look at what’s new in its TypeScript 5.1 release candidate since the beta. Among the changes for developers are:

  • Corrected behavior for init hooks in decorators
  • Changes to emit behavior under isolatedModules, ensuring that script files are not rewritten to modules
  • New refactoring support for moving declarations to existing files

The post also reviews all the changes that 5.1 incorporates now, and Rosenwasser noted that they anticipate very few additional changes before the stable version of TypeScript 5.1 in a few weeks. In fact, the team already recently published the TypeScript 5.2 iteration plan, which will incorporate decorator metadata. Support for decorators is expected to be available in the next JavaScript release.

The post Dev News: New Microsoft Edge Tools and Goodbye Node.js 16 appeared first on The New Stack.

]]>
4 Anti-Patterns That Microsoft Recommends Web Devs Avoid https://thenewstack.io/4-anti-patterns-that-microsoft-recommends-web-devs-avoid/ Fri, 26 May 2023 13:56:51 +0000 https://thenewstack.io/?p=22709339

Microsoft has spent the past year “diving deep into web performance,” Rob Paveza, a principal software engineering manager, said during

The post 4 Anti-Patterns That Microsoft Recommends Web Devs Avoid appeared first on The New Stack.

]]>

Microsoft has spent the past year “diving deep into web performance,” Rob Paveza, a principal software engineering manager, said during Microsoft’s conference, Build, a live and in-person event held this week.

Paveza and Zohar Ghadyali, program manager on the Microsoft Edge Dev Tools team, introduced a slew of new and experimental features available in Edge 114, which is in beta but scheduled for stable release June 2. The features are designed to improve performance as well as developer experience and include debugging assistance, a new JSON viewer, new dev tools, and Focus Mode.

But the team didn’t just examine what it could do differently to address web performance. Paveza also identified four common anti-patterns that developers can avoid to improve the speed of their web applications.

Anti-Pattern One: Using SVG Animations

SVG animations can be used to apply transformations, but they cause abnormally high CPU usage and “unexpected outcomes that we were surprised to see,” Paveza said, adding that it “caused a bit of work for the web platform team to go investigate and track down.”

What the team learned is that CSS animations have better performance. The animation tool in Dev Tools also has much better support for CSS animations, he added.

Anti-Pattern Two: CSS Properties That Trigger Reflow

“Another issue that the browser runs into is reflow,” Paveza said. “This is when the browser recalculates the positions and geometries of the elements on the page with the goal of re-rendering part or all of the page. This is a costly operation and can block user input.”

Developers should minimize using CSS properties that trigger reflow or changing them, he said.

“Some CSS properties can actually be handled by the compositor thread instead of the main thread,” he added. “When you do have to change these properties at runtime, the ones that are better to use are those like transform and opacity. So we strongly encourage you to do that.”
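A rough sketch of the difference: animating a layout property such as left forces reflow on every frame, while the same movement expressed as a transform can usually be handled by the compositor. The element selector here is illustrative, and the first snippet assumes the element is absolutely positioned.

```javascript
const panel = document.querySelector(".panel"); // illustrative, absolutely positioned element

// Likely to trigger reflow on every frame: 'left' is a layout property.
panel.animate(
  [{ left: "0px" }, { left: "200px" }],
  { duration: 300, fill: "forwards" }
);

// Compositor-friendly alternative: express the same movement as a transform.
panel.animate(
  [{ transform: "translateX(0)" }, { transform: "translateX(200px)" }],
  { duration: 300, fill: "forwards" }
);
```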

Anti-Pattern Three: The Large DOM

It makes sense intuitively that the browser will take longer to handle DOM mutations, or recalculate cells, or compute layout when there’s a large DOM, simply because there are more elements on the page, Paveza said.

“When you’re using libraries like React that render components for you, we found some techniques like using React fragments, or just plain open bracket close bracket, instead of actual elements, can improve performance,” he suggested.
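A minimal sketch of that suggestion, with an illustrative component: wrapping children in a fragment (the “plain open bracket close bracket” shorthand) instead of an extra div keeps the rendered DOM flatter.

```jsx
// Extra wrapper element — adds one more node to the DOM for every row.
function RowWithWrapper({ label, value }) {
  return (
    <div>
      <dt>{label}</dt>
      <dd>{value}</dd>
    </div>
  );
}

// Fragment shorthand — renders the same children with no extra node.
function Row({ label, value }) {
  return (
    <>
      <dt>{label}</dt>
      <dd>{value}</dd>
    </>
  );
}
```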

Another important technique to know about is the Shadow DOM.

“This is another major functionality of the web platform that we want to call out, because it provides containment, meaning that subsets of your DOM can mutate or need to be restyled without affecting all of the other elements outside of that shadow,” he said.
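A minimal sketch of that containment, using an illustrative custom element: styles declared inside the shadow root stay scoped to it, so restyling the widget does not force the rest of the page to be recalculated.

```javascript
class StatusBadge extends HTMLElement {
  connectedCallback() {
    // Create a shadow root; its markup and styles are contained here.
    const shadow = this.attachShadow({ mode: "open" });
    shadow.innerHTML = `
      <style>
        /* Scoped to this shadow tree — it neither leaks out
           nor gets hit by page-level selectors. */
        span { padding: 2px 8px; border-radius: 4px; background: #eee; }
      </style>
      <span><slot></slot></span>
    `;
  }
}
customElements.define("status-badge", StatusBadge);

// Usage in markup: <status-badge>Online</status-badge>
```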

Anti-Pattern Four: CSS in JavaScript

Keep your CSS out of your JavaScript for faster performance, advised Paveza.

“CSS in JS absolutely has some benefits from a developer ergonomics perspective — you can write styles for specific components, you can specify dynamic styles, and it simplifies the amount of things that you have to juggle as a web developer between JavaScript, HTML and CSS,” Paveza explained. “However, we absolutely have observed poor performance.”

Injecting these styles into JavaScript takes longer to handle than vanilla CSS, he added.

“We strongly recommend extracting your CSS in JS to a stylesheet, and then serving that alongside your JavaScript for better performance,” he added.
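A rough before-and-after sketch of that advice, with made-up class and file names: instead of computing and injecting styles from JavaScript at runtime, the rules live in a static stylesheet and JavaScript only toggles a class.

```javascript
// Before: style objects are created and applied from JavaScript at runtime.
function applyCardStyles(el) {
  Object.assign(el.style, {
    padding: "16px",
    borderRadius: "8px",
    boxShadow: "0 1px 4px rgba(0, 0, 0, 0.2)",
  });
}

// After: the same rules live in a static stylesheet (e.g. card.css, served
// alongside the bundle), and JavaScript only toggles a class name:
//
//   .card { padding: 16px; border-radius: 8px;
//           box-shadow: 0 1px 4px rgba(0, 0, 0, 0.2); }
//
function markAsCard(el) {
  el.classList.add("card");
}
```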

The post 4 Anti-Patterns That Microsoft Recommends Web Devs Avoid appeared first on The New Stack.

]]>
New Image Trends Frontend Developers Should Support https://thenewstack.io/new-image-trends-frontend-developers-should-support/ Thu, 25 May 2023 13:00:55 +0000 https://thenewstack.io/?p=22708951

Media management firm Cloudinary is working on a plug-in that will enable developers to leverage its image capabilities from within

The post New Image Trends Frontend Developers Should Support appeared first on The New Stack.

]]>

Media management firm Cloudinary is working on a plug-in that will enable developers to leverage its image capabilities from within ChatGPT.

It’s part of keeping up with new technologies that, like AI, are changing user expectations when it comes to a frontend experience, said Tal Lev-Ami, CTO and co-founder of online media management company Cloudinary.

“If you look at e-commerce, many websites now have ways to know what you want to buy the 360 [degree] way and some of them also have integrated AR experiences where you can take whatever object it is and either see it in the room or see it on yourself,” Lev-Ami told The New Stack. “These are considerations that are becoming more critical for developers to support.”

Another thing developers should consider is how AI-enabled media manipulation will alter the expectations of end users. He compared it to the internet’s shift from simply text to using images. Images didn’t replace text, but users suddenly expected images on web pages.

“The expectations of the end users on the quality and personalization of the media is ever increasing, because they see ads and they see more sophisticated visual experiences,” he said. “It’s not that everything before is meaningless; it’s still needed. But if you’re not there to meet the expectations of the end user in terms of experiences, then you’re getting left behind.”

Supporting 3D

There are challenges around supporting 3D, such as how to optimize images and (for instance) how to take a file developed for CAD and convert it to a media 3D format that’s supported on the web, such as glTF, an open standard file format for three-dimensional scenes and models, Lev-Ami said.

A case study with Minted, a crowdsourced art site with 59.8 million images, offers a look at what’s required to support 3D. Minted used Cloudinary to improve its image generation pipeline with support for a full set of 2D and 3D transforms and automation technology. A single product at Minted can have more than 100,000 variants, according to a case study of Minted’s Cloudinary deployment.

The case study explained how the art site worked with the media company to create a 3D shopping experience. First, the images of the scenes were created in a studio, then an internal image specialist sliced each image into layers and corrected for transparency, color and position. A script was then used to generate the coordinates needed to position these layers as named transforms into a text file (CSV), which, when uploaded to Cloudinary (along with the previously created scene layers), created the final image.

Separately, Minted’s proprietary pipeline ingests raw art files from artists and builds the base images for each winning design. When a customer navigates to an art category page or product details page on Minted, the page sends requests to Cloudinary for images that composite the correct combination of scenes, designs, frame and texture into the final thumbnails, the case study explained.

“For close-up product images, Minted makes use of Cloudinary’s 3D rendering capability as well as its e_distort API feature,” the case study noted. “A 3D model with texture UV mapping was created for the close-up image that shows off the texture and wrapping effect of a stretched canvas art print. With some careful tweaking of the 3D coordinates, the model is uploaded and Cloudinary does the rest, composing the art design as texture onto the model.”
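As a rough sketch of the URL-based transformations the case study refers to, assuming a made-up cloud name, named transform and public IDs (the coordinates are also invented for illustration):

```javascript
// Illustrative only: the cloud name, named transform and public IDs are made up.
// Cloudinary composes the image at request time from the transformation segment
// of the delivery URL.
const base = "https://res.cloudinary.com/demo-cloud/image/upload";

// Apply a previously defined named transform (for example, one generated from
// the CSV of layer coordinates) to a base scene image:
const framedThumb = `${base}/t_frame_walnut_5x7/scenes/living-room.jpg`;

// Warp a flat art design onto a surface with the distort effect, passing the
// four corner coordinates of the target area:
const canvasCloseUp =
  `${base}/e_distort:40:25:480:60:470:540:35:510/designs/botanical-print.jpg`;
```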

Bring Your Own Algorithms

WebAssembly is another relative newcomer technology for the frontend, where it can be used to deploy streaming media, so I asked Lev-Ami if Wasm is also changing how media works on the frontend, or perhaps in how Cloudinary manages its own workload? While Cloudinary does deploy Wasm to support edge computing, the company also allows developers to upload Wasm and run their own algorithms.

“We actually have a capability where you can upload your own Wasm so that you can run your own algorithm as part of the media processing pipeline,” he said. “If you have some unique algorithm that you want to run as part of the media processing pipeline, you can do that. The safety and security around Wasm allows us to be more open as a platform and allows customers to handle use cases where they need to run their own algorithms part of the pipeline.”

Wasm has fewer security risks than code because it executes within its own sandbox, according to Andrew Cornwall, a senior analyst with Forrester who specializes in the application development space. Code compiled to WebAssembly can’t grab passwords, for instance, Cornwall recently told The New Stack.

The post New Image Trends Frontend Developers Should Support appeared first on The New Stack.

]]>
Microsoft One-ups Google with Copilot Stack for Developers https://thenewstack.io/microsoft-one-ups-google-with-copilot-stack-for-developers/ Wed, 24 May 2023 18:45:52 +0000 https://thenewstack.io/?p=22709048

No prizes for guessing the focus of this year’s Microsoft Build developer conference. Of course, it was AI — just

The post Microsoft One-ups Google with Copilot Stack for Developers appeared first on The New Stack.

]]>

No prizes for guessing the focus of this year’s Microsoft Build developer conference. Of course, it was AI — just as it had been at Google I/O earlier this month. But whereas Google’s AI announcements seemed disorganized and all over the map, Microsoft came up with a cohesive framework to entice developers: its new “Copilot stack.”

In the opening keynote, Satya Nadella positioned the ChatGPT-inspired AI era as the latest installment in society’s pursuit of the “dream machine” (referencing M. Mitchell Waldrop’s 2001 book about J.C.R. Licklider).

The main focus of his presentation, though, was Copilot. While he started out referencing GitHub Copilot, the first Microsoft collaboration with OpenAI, it was only a matter of time before Windows was inserted into proceedings. “Next, we are bringing Copilot to the biggest canvas of all, Windows,” he announced to the live Build audience. That got the biggest cheer of the day.

Nadella then introduced a video about chat AI functionality that began with the magic words, “integrated into all of Windows.” Later in the opening keynote, other forms of Copilot were introduced — including Microsoft 365 Copilot, for office workers.

Technical Details about Copilot Stack

In the second keynote, Microsoft Chief Technology Officer Kevin Scott got into more detail about the new copilot initiatives, from a developer perspective. He began by highlighting Microsoft’s partnership with OpenAI, putting the success of the relationship down to Microsoft having “an end-to-end platform for building AI applications.” He then positioned Azure as “the cloud for AI” and Windows as “the best client for AI development.”

Kevin Scott

Scott then brought on stage Greg Brockman, the president and co-founder of OpenAI. The discussion between the two around ChatGPT plugins was particularly enthusiastic, with Brockman encouraging developers to “really go into specific domains and figure out how to make this technology work there.” He used the example of plugins in the legal domain, where developers are “getting expertise and talking to lots of lawyers and understanding what their pain points are with this technology.”

Then came the big news. Scott said that Microsoft has built a “Copilot technology stack” for developers, enabling them to add AI functionality to any software — in other words, a “copilot.”

Copilot stack

Commenting on the frontend layer, Scott said that with a copilot, “it’s going to be way less of that fiddling around mapping user interface elements to little chunks of code than you’re accustomed to.”

For the orchestration layer, Scott described it as “the business logic of your copilot.” In LLM terms, it’s where the prompting happens. Microsoft’s orchestration mechanism to help build its apps is called Semantic Kernel (see my separate writeup about that), which has been open sourced. Scott added that there are other open source alternatives for orchestration too — he gave a special shoutout to LangChain. In addition, Microsoft has a new tool called Prompt Flow, which Scott said was “another orchestration mechanism that actually unifies LangChain and Semantic Kernel.”

Part of Microsoft’s orchestration layer is the “meta prompt,” which Scott described as a “standing set of instructions that you give to your copilot that get passed down to the model on every turn of conversation.” He added that it’s “where a bunch of your safety tuning is going to happen.”

“Grounding” is where we get into things like vector databases and “Retrieval Augmented Generation” (RAG), both of which I discussed in my recent interview with Pinecone. Scott described grounding as “all about adding additional context to the prompt that may be useful for helping the model respond to the prompt that’s flowing down.”

Grounding
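As a rough sketch of what grounding looks like in practice (not Microsoft’s actual implementation), the orchestration layer retrieves relevant context, for example from a vector database, and prepends it together with the meta prompt to whatever the model finally sees. Every function below is a hypothetical stand-in:

```typescript
// Hypothetical helpers — stand-ins for a real embedding model, vector store and LLM.
declare function embed(text: string): Promise<number[]>;
declare function vectorSearch(vector: number[], topK: number): Promise<string[]>;
declare function callModel(prompt: string): Promise<string>;

const META_PROMPT =
  "You are a helpful assistant. Answer only from the provided context; " +
  "if the context does not contain the answer, say so.";

// Retrieval Augmented Generation: ground the user's question with retrieved context.
async function groundedAnswer(question: string): Promise<string> {
  const queryVector = await embed(question);
  const contextChunks = await vectorSearch(queryVector, 5);

  const prompt = [
    META_PROMPT,
    "Context:",
    ...contextChunks,
    `Question: ${question}`,
  ].join("\n\n");

  return callModel(prompt);
}
```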

Finally, at the bottom of the stack are foundation models and infrastructure. “We give you a bunch of choices for how to use foundation models in this copilot platform, on Azure and on Windows,” Scott said.

Azure AI Studio

The third keynote featured Scott Guthrie, EVP Cloud + AI, and several of his Microsoft colleagues. Part of what he was promoting was Microsoft’s development platform for creating a ChatGPT plugin. “We’re embracing an open plugin standard that provides plugin interoperability across ChatGPT and all of the Microsoft Copilot offerings,” he said.

One of the more interesting products Guthrie referenced was the Azure AI Studio, which he said “makes it incredibly easy to ground your AI models with your own data and to build Retrieval Augmented Generation (or RAG) based solutions.” He added that it enables you to “build your own copilot experiences that are specific to your apps and organizations.”

Scott Guthrie

For prompt engineering, which Guthrie reiterated is a part of the orchestration layer of the copilot stack, Microsoft is introducing a new framework called Prompt Flow.

“Prompt Flow provides [an] end-to-end AI development tooling that supports prompt construction, orchestration, testing, evaluation and deployment,” he explained. As Kevin Scott had hinted at earlier, Prompt Flow can leverage Semantic Kernel and LangChain as well.

Finally, Guthrie announced Microsoft Fabric, “a unified platform for data analytics, really designed for the era of AI.” He added that “it’s lake centric and has an open data architecture, and it has deep integration with Microsoft 365.”

Overall, this felt like a much more cohesive set of AI announcements than what Google announced recently. The Copilot stack in particular will surely resonate with Microsoft’s enterprise-focused developer community.

The post Microsoft One-ups Google with Copilot Stack for Developers appeared first on The New Stack.

]]>
How to Start a Software Project: A Guide for Junior Devs https://thenewstack.io/how-to-start-a-software-project-a-guide-for-junior-devs/ Sat, 20 May 2023 14:00:35 +0000 https://thenewstack.io/?p=22708121

“OK, let’s start coding!” However exciting these words are, they are far more comforting when it won’t be you who

The post How to Start a Software Project: A Guide for Junior Devs appeared first on The New Stack.

]]>

“OK, let’s start coding!” However exciting these words are, they are far more comforting when it won’t be you who has to do all the work to kick everything off.

Consequently, starting a software project is a real divider between the experienced senior and the eager junior — and so I recommend that tyro devs get very familiar with all the areas that need to be covered, then have a go with a project that doesn’t have too many eyes on it. Many decisions can be delayed, and certain things can be trivially changed without any side effects, but some items are more expensive to alter later. This post is about where to start, and what bits are best to get right early on.

What Good Looks Like

The number one killer of all projects — even those that are not scrutinized in any way — is that their worth cannot be measured. Even little habits you start for yourself, like going to the gym or starting a diet, get quietly dropped if you see no measurable progress. In industry, unmeasurable projects may look good, but they have an inbuilt kill switch because you cannot prove how they add any value. Remember all those slightly annoying surveys you got in your inbox asking you questions about a website or service you just used? These are people making a solid effort to measure intangibles like “knowledge share.” However hokey it is, try to build a measurement track into your project from the start. There are various things to measure, from early adoption, to unique page views. Conversely, you can measure the decline in a bad thing your project is trying to prevent.

Keeping an up-to-date project wiki is the key to stopping early problems from spreading because of unclear aims. Write down what the project should achieve, the basic components you think are needed, who the stakeholders are and, yes, a few target dates. Novel situations will always occur, and people will do the wrong things — and frankly, some of the decisions you make early on will be faulty. None of these cause chaos. That is caused when people don’t share a strong enough idea of a project’s direction with anyone else. Just writing enough down is quite easy to do, and stems a lot of doubt.

The MVP and the Fakes

The first thing you produce or build should just be in the shape of what you need, but little else. If the product is a website, then it should match the format you want, but just be filled by a “Lorem ipsum” placeholder filler. What matters is that the MVP (minimum viable product) forces you to focus on the infrastructure you need to get in place.

Do you need access to an expensive service? Are you charging money? Is your service time-dependent in some way? In almost all cases you will need to fake one or more components of your project in order to test it, without invoking real-world costs or conditions.

Sometimes quite a bit of work is needed to create the fake environment, but that must be done. For example, while it is much easier to charge money than it used to be, we’ve all seen services that try to introduce charging only to discover that it isn’t so easy to plug in later (because of all the previous assumptions).

Services like AWS Lambda are very good for building cheap fakes, as they only charge when triggered. Fake data also needs to be considered. Testing on data that doesn’t match your product’s actual customer use will inevitably make for bad outcomes. A case in point was an institution that used obfuscated live data for testing. But the data was so heavily disguised that it destroyed the natural relationship between customers (for example, real people live together) and so it caused problems later.
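As a minimal sketch of that idea, a tiny Lambda function can stand in for a payment provider in test environments, returning a canned response without moving real money; the request and response shapes are invented for illustration.

```javascript
// fake-payments.js — a stand-in for a real payment provider, deployed as an
// AWS Lambda behind an HTTP endpoint that only test environments call.
// The request/response shapes here are made up for illustration.
exports.handler = async (event) => {
  const { amount, currency } = JSON.parse(event.body || "{}");

  // Always "succeed" without charging anyone.
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: `fake_charge_${Date.now()}`,
      amount,
      currency,
      status: "succeeded",
    }),
  };
};
```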

Identities and Who Does What

One of those “hard to alter later on” decisions is forgetting to create email addresses, domain entries and accounts for your project, and instead setting these up under your own personal account details because you wanted to save time. Don’t do this. It doesn’t matter if you use a domain name or email address that doesn’t match the final identity — it matters that these are not connected to you or anyone else. Otherwise, the whole project goes on holiday when you do.

If you are fortunate enough to have help, then you need to split up the work into sensible portions. Fortunately, the agile methodology works very well for developers on starting projects — because at the beginning you have nothing but a list of tasks to be achieved. People can only take on the tasks if they understand them, which forces you to define them clearly. The same is true if you plan to use AI help — record whatever prompts you use. To start with, this is all you need. The agile mantra is:

Make it work, make it right, make it fast.

So start by making it work with whoever is onboard.

Environments and QA

If you start by understanding what to measure, and where to fake, you will probably find testing and Quality Assurance (QA) follow on naturally. You can use Jira or Trello to communicate with your testers, but whatever you choose should mesh with the tools you use to split up your stories and tasks. The world of containers has massively improved the chances that any environment you build in is pretty darn close to the environment your testers are using.

If you are behind a firewall, now is the time to make good friends with the security team. Otherwise, you will quickly find that you cannot share anything with any offshored testers.

When I say environment, I mean staging, QA and production. If you remove these terms for a moment, we are generally just talking about virtual computing spaces with different configurations. So for example, the QA environment allows your testers to play with the latest stable build and is configured to work with fake services. Scripting to create your environment will involve some type of playbook — make sure you have the skills available to do that.

Developer Tooling

How to actually write the code comes much lower down the priority list than you may have imagined, because it is much easier to set up and change. You can’t blame software developers for wanting to focus on frameworks, coding standards and editors — as that is our stock in trade. Most initial decisions can be altered later. In fact, rewriting your codebase should be something you aim to do at some point; it isn’t something to avoid altogether. But, like going to the toilet, just don’t wait until you have to.

The bigger IDEs tend to include dummy projects and lots of services that can help everyone start. Yes, they want to tie you into their other services, but their utility may be the difference between starting or not. The trick with using any highly integrated services from third-party companies is to make sure you have defined your architecture before you start, so that Microsoft (or whoever) doesn’t redefine your project to suit its tooling. Physical dependency is simple to change, mental dependency is a bit harder to shift.

If you are programming in the open, you will want to use Git with GitHub for your central code repository. But in most cases, you will want to run private repositories with one of the many central repository services. If you know you will produce lots of slow-changing artifacts, then you may need an artifact repository (or DockerHub), and if you are dealing with lots of large files and non-text files (such as large images) then you may need to avoid Git altogether and use something like PlasticSCM (which is now within Unity).

Setting up CI/CD

An example CI/CD pipeline; via dev.to

(Unless you are writing Go, don’t expect to see any blue gophers near your screen)

The center of your project will always be the build pipeline — the heart of Continuous Integration/Continuous Deployment (CI/CD). Simply put, you need to create a build from the appropriate code branch of your product or service and deploy it to one or more environments from a single signal. In other words, automate the deployment. This isn’t something you need immediately, but don’t do anything early on to prevent automation later.

Teams still use the open source favorite Jenkins to check out, build and deploy, but there are many other options. If you keep an eye on maintaining the configuration files that work for you, then changing the pipeline shouldn’t be too painful.

Once a basic build automation is in place, you can slot in other services — like code coverage and security testing.

Conclusion

So you’ve defined your project, worked out what good looks like, described what you think the components and processes should be, figured out the infrastructure, got the roles sorted out, checked in the first MVP and cranked the handle on the pipeline.

The important thing about projects is not how they start (no one will remember if all goes well), but how well they are maintained through their lifecycle. And yes, you also need to know when and how to retire them.

The post How to Start a Software Project: A Guide for Junior Devs appeared first on The New Stack.

]]>
Dev News: Trouble in npm, Vue 3.3 and Cloudflare Updates https://thenewstack.io/dev-news-trouble-in-npm-vue-3-3-and-cloudflare-updates/ Sat, 20 May 2023 12:00:13 +0000 https://thenewstack.io/?p=22708599

ReversingLabs researchers revealed Thursday that two malicious packages lived on npm for two months before being detected. “The presence of

The post Dev News: Trouble in npm, Vue 3.3 and Cloudflare Updates appeared first on The New Stack.

]]>

ReversingLabs researchers revealed Thursday that two malicious packages lived on npm for two months before being detected.

“The presence of such suspicious characteristics and behaviors first caused the npm package nodejs-encrypt-agent to come to our attention,” wrote Lucija Valentić, a software threat researcher at ReversingLabs. “First published more than two months ago, nodejs-encrypt-agent appears at first glance to be a legitimate package. However, discrepancies raised red flags with our researchers.”

Npm is a widely used JavaScript package manager and registry.

Researchers detected an open source info stealer called TurkoRat after noting several red flags in the files, including that the package name differed from the name listed in the readme.md file, and version number irregularities in the npm package nodejs-encrypt-agent. At first, they dismissed the findings, thinking npm administrators would have recognized if the package was malicious. But the researchers decided to analyze the packages using the company’s Software Supply Chain Security solution.

“When we looked inside the nodejs-encrypt-agent, we found that the code and functionality mirrored the agent-base package it was squatting on. That is to be expected,” Valentić stated. “There was, however, a small, but very significant difference: The nodejs-encrypt-agent package contained a portable executable (PE) file that, when analyzed by ReversingLabs was found to be malicious.”

The code was also found in a few nodejs-cookie-proxy-agent packages. The researcher team noted that exposure was limited, with the nodejs-encrypt-agent downloaded about 500 times and the nodejs-cookie-proxy-agent downloaded less than 700 times.

“Still, the malicious packages were almost certainly responsible for the malicious TurkoRat being run on an unknown number of developer machines. The longer-term impact of that compromise is difficult to measure,” Valentić noted.

The PE file executes almost immediately after the package runs, enacting malicious commands hidden in the first few lines of the index.js file, the researchers found.

Among the bad behaviors identified in the PE component are the ability to:

  • Write and delete from Windows system directories;
  • Execute commands; and
  • Tamper with domain name system settings.

“TurkoRat is just one of many open source malware families that are offered for ‘testing’ purposes, but can readily be downloaded and modified for malicious use, as well,” Valentić stated. “TurkoRat’s author clearly anticipates this, as he provides instructions on how to use malicious code, while stating that he is ‘not responsible for any damages this software may cause and that it was only made for personal education.’”

This is not the first time npm has made news for harboring malicious code.

“When using packages from public repositories in their projects, developers should keep an eye peeled for these small, but telling details to avoid a malicious package being introduced as a dependency in some larger project,” Valentić advised.

Vue 3.3 Focuses on Developer Experience with TypeScript

Vue 3.3 is now available, with the new release focused on developer experience. Specifically, it improves how the SFC <script setup> syntax works with TypeScript, according to the Vue team.

The compiler can now resolve imported types and supports a limited set of complex types, which means types used in the type parameter position are no longer limited to local types and support more than type literals and interfaces, the team explained in this blog post.

Components using <script setup> can now accept generic type parameters via the generic attribute. Also in this upgrade:

  • More ergonomic defineEmits
  • Typed slots with defineSlots

It also introduces some experimental features, including reactive props destructure, which allows destructured props to retain reactivity and provides a more ergonomic way to declare default prop values. Vue 3.3 also simplifies the usage of two-way binding with v-model via a new defineModel macro. Since both are experimental, they require an explicit opt-in, the team wrote.
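A minimal sketch of those two features, assuming a Vue 3.3 project with the experimental defineModel flag enabled; the component itself is illustrative.

```vue
<!-- Illustrative component; requires Vue 3.3 with the experimental defineModel flag enabled. -->
<script setup lang="ts" generic="T">
// Generic type parameter declared directly on <script setup>.
const props = defineProps<{ items: T[] }>()

// Experimental in 3.3: two-way binding without manually wiring
// a modelValue prop and an update:modelValue emit.
const query = defineModel<string>()
</script>

<template>
  <input v-model="query" placeholder="Filter..." />
  <ul>
    <li v-for="(item, i) in props.items" :key="i">{{ item }}</li>
  </ul>
</template>
```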

It also addresses JSX import source support.

“Currently, Vue’s types automatically registers global JSX typing. This may cause conflict when used together with other libraries that need JSX type inference, in particular React,” the post noted. “Starting in 3.3, Vue supports specifying JSX namespace via TypeScript’s jsxImportSource option. This allows the users to choose global or per-file opt-in based on their use case.”

It still registers JSX namespace globally to support backward compatibility — but be forewarned, the plan is to remove the default global registration in version 3.4. The blog post hits the highlights, but a complete list of changes is available on GitHub.

CDN Cloudflare Adds Next.js, Angular, Other Adapters

Content delivery network Cloudflare released a slew of developer-focused offerings during its Developer Week, including AI and support for more JavaScript frameworks.

An AI assistant, named Cursor, has been trained to answer questions about Cloudflare’s Developer Platform. This blog post outlines how Cloudflare sees AI evolving to fit developers’ needs, but Cursor will first be used as an addition to Cloudflare’s documentation to help developers get answers as quickly as possible. When asked a question, Cursor will provide a text-based response and links to relevant pages in the documentation.

Also on the AI front, it also introduced Constellation, which allows developers to run pre-trained machine learning models and inference tasks on Cloudflare’s network.

Cloudflare also announced it had new or improved adapters for Next.js, Angular, Qwik, Astro, Nuxt and Solid.

Finally, the company announced an improved Quick Edit in Cloudflare Workers and Wrangler v3. Quick Edit is embedded within the Cloudflare dashboard and is “the fastest way to get up and running with a new worker,” Cloudflare said. It allows developers to preview and deploy changes to code. Wrangler 3 provides developers with an easy-to-debug local testing environment.

The post Dev News: Trouble in npm, Vue 3.3 and Cloudflare Updates appeared first on The New Stack.

]]>
Generative AI Thread Runs Through Google’s New Products https://thenewstack.io/generative-ai-thread-runs-through-googles-new-products/ Fri, 19 May 2023 19:37:31 +0000 https://thenewstack.io/?p=22708643

MOUNTAIN VIEW, Calif. — The recent Google I/O 2023, edition 16, was as newsy as it’s ever been, and it’s

The post Generative AI Thread Runs Through Google’s New Products appeared first on The New Stack.

]]>

MOUNTAIN VIEW, Calif. — The recent Google I/O 2023, edition 16, was as newsy as it’s ever been, and it’s always one of the newsiest IT conferences in the world each year.

A long list of new products and software updates were introduced for both consumer and enterprise markets, including:

  • The Pixel 7a, a new mid-range smartphone with a 6.1-inch OLED display, a 12.2MP main camera, and a 12MP ultrawide camera.
  • The Pixel Fold, a new foldable smartphone with a 7.6-inch OLED display and a 5.8-inch cover display.
  • The Pixel Tablet, a new Android tablet with a 10.3-inch OLED display.
  • Updates to Android 13, including new features such as Material You, improved privacy controls, and new gaming features.
  • Updates to Google Workspace, including new features such as improved collaboration tools and new AI-powered features.
  • Updates to Google Cloud, including new features such as improved performance and security.
  • Updates to Google AI, including new research projects and new tools for developers.

Richard MacManus detailed Google’s new and updated AI developer tools in his May 10 article.

AI in All the Company’s Products

The prevailing thread woven through the two-day event was that generative AI is finding a place in virtually all of the company’s products in order to make them more efficient, user-friendly and secure, CEO Sundar Pichai told the Shoreline Amphitheatre audience on a cloudy day in the Bay Area. This follows a general trend in the IT industry.

“We are reimagining all our core products, including Search,” Pichai said. He offered examples of how generative AI is improving other Google products, including Workspace, Gmail, Maps, Photos, and a list of others.

“Looking ahead, we’re making AI helpful for everyone,” Pichai said. “It’s the most profound way we will advance our mission. And we are doing this in four important ways: First, by improving your knowledge and learning and deepening your understanding of the world. Second, by boosting creativity and productivity so you can express yourself and get things done. Third, by enabling developers and businesses to build their own transformative products and services. Finally, by building and deploying AI responsibly, everyone can benefit equally. Our ability to make AI helpful for everyone relies on continuously advancing our foundation models.”

Google has made its new Enterprise Search product available to a limited audience at this time. To get on the waitlist, go to the Google Search Labs homepage, where you will be given the option to sign up.

The new Search combines several standard Google features, including Photos, Maps and product recommendations from vendors and organizations. A use case given as an example involved whether a family of four and their dog would have more fun at Bryce Canyon or Arches national parks on a summer vacation; the Search app brought up photos, videos, maps and recommendations from the National Park Service, for example, so the family could make a decision on child- and dog-friendliness of both locations. It only took a few minutes for the family to decide that while both parks were kid-friendly, there were more options for the dog at Bryce Canyon — so that’s where they went.

More Highlights

Here are more highlights of a busy conference.

Google Cloud

Updates include:

  • Google Kubernetes Engine (GKE) 1.22: GKE 1.22 is the latest version of Google’s managed Kubernetes service. It includes a number of new features and improvements, such as support for IPv6, enhanced security and improved performance.
  • Cloud Spanner 2.1: Cloud Spanner 2.1 is the latest version of Google’s fully managed, mission-critical, relational database service. It includes a number of new features and improvements, such as support for JSON data types, improved performance and updated security.
  • Cloud Bigtable 2.0: Cloud Bigtable 2.0 is the latest version of Google’s scalable, durable, and highly available NoSQL database service. It includes a number of new features and improvements, such as support for secondary indexes, improved performance, and enhanced security.
  • Cloud Dataproc 2.0: Cloud Dataproc 2.0 is the latest version of Google’s managed Hadoop and Spark service. It includes a number of new features and improvements, such as support for TensorFlow, improved performance, and enhanced security.
  • Cloud Dataflow 2.0: Cloud Dataflow 2.0 is the latest version of Google’s fully managed, serverless Apache Beam service. It includes a number of new features and improvements, such as support for streaming data, improved performance, and enhanced security.

Google AI

Updates include:

  • LaMDA: LaMDA is a new language model from Google AI that is capable of generating human-quality text. LaMDA can be used to create a variety of content, including articles, blog posts and even poems.
  • PaLM: PaLM is a new AI system that is capable of performing a variety of tasks, including writing different kinds of creative content, translating languages and answering questions in an informative way. PaLM is still under development, but Pichai said it has learned to perform many kinds of tasks, including:
    • “I will try my best to follow your instructions and complete your requests thoughtfully.”
    • “I will use my knowledge to answer your questions in a comprehensive and informative way, even if they are open ended, challenging, or strange.”
    • “I will generate different creative text formats of text content, like poems, code, scripts, musical pieces, email, letters, etc. I will try my best to fulfill all your requirements.”
  • Imagen: Imagen is a new AI system from Google AI that is capable of generating realistic images from text descriptions. Imagen can be used to create a variety of images, including landscapes, portraits, and even scenes from movies.
  • MusicLM: MusicLM is capable of generating a variety of music, including songs, pieces of classical music, and even jazz improvisations, Pichai said.

Google Workspace

Updates include:

  • Improved spam protection in Google Drive: Spam protection in Google Drive has been improved by adding new filters and machine learning algorithms. This will help to keep your Drive safe from unwanted messages and attachments.
  • Greater visibility with additional Google Calendar statuses in Google Chat: You can now see more information about people’s availability in Google Chat. This includes whether they are available, busy, or out of office.
  • The ability to quote a previous message in Google Chat: You can now quote a previous message in Google Chat. It is now easier to keep track of conversations and make sure that everyone is on the same page.
  • The ability to expand upon Gmail security with BIMI: BIMI (Brand Indicators for Message Identification) is a new standard that allows organizations to display their logos in email messages. This helps to increase email security and to make it easier for users to identify legitimate emails from phishing attempts.
  • Google Meet now supports full HD (1080p) video for live streams. This makes it possible to have high-quality video calls with large groups of people.

Pixel 7a Smartphone

The Pixel 7a is a new, thinner-than-its-predecessor mid-range phone that offers many features for a relatively low price ($499). It is powered by the Google Tensor G2 processor, which is the same processor that powers the Pixel 7 and Pixel 7 Pro. It has a 6.1-inch OLED display with a resolution of 1080 x 2400 pixels.

The Pixel 7a has a triple-lens rear camera system that includes a 64-megapixel main sensor, a 12-megapixel ultrawide sensor, and a 2-megapixel macro sensor. It has an 8-megapixel front-facing camera.

The Pixel 7a runs Android 13, the latest version of Google’s mobile operating system. Android 13 is a major update that brings a number of new features and improvements, including a new design, improved performance, and new privacy features.

Pixel Fold

The Pixel Fold is the first foldable phone from Google. It has a 7.6-inch AMOLED display when unfolded, and a 5.8-inch AMOLED display when folded. It is powered by the Google Tensor G2 processor and has 12GB of RAM and 512GB of storage. The Pixel Fold has a triple-lens rear camera system with a 50-megapixel main sensor, a 12-megapixel ultrawide sensor and a 48-megapixel telephoto sensor. It also has an 8-megapixel front-facing camera. The Pixel Fold runs Android 13; pricing starts at $1,799.

Pixel Tablet

The Pixel Tablet is the first tablet from Google since the Pixel Slate in 2018. The Pixel Tablet has an 11-inch OLED display, is powered by the Google Tensor G2 processor and has 8GB of RAM and 128GB of storage. It has a 12.2-megapixel rear camera and an 8-megapixel front-facing camera. The Pixel Tablet runs Android 13; pricing starts at $499.

Additional details:

  • It has a metal body with a textured finish.
  • It has four speakers for immersive sound.
  • It has a long-lasting battery that can provide up to 12 hours of video playback.
  • It supports USI 2.0 styluses for taking notes and drawing.
  • It comes with a charging dock that can also be used as a stand.

New Android 13 Features

Updates to Android 13

Android 13, released in August 2022, is available on all Pixel devices, as well as a number of other Android devices from other manufacturers. Android 13 brings a number of new features and improvements, including:

  • Material You: Android 13 expands Material You, the design language introduced in Android 12 that lets users customize the look and feel of their device to match their personal style. Users can choose from a variety of colors and themes, and the system automatically adjusts the look of the UI to match the user’s choice.
  • Privacy features: Android 13 includes a number of new privacy features, such as a privacy dashboard that gives users a clear view of how their data is being used by apps and a microphone and camera indicator that shows when an app is using either of these sensors. Users can also revoke permissions from apps that they no longer need.
  • Productivity features: Android 13 includes a number of new productivity features, such as a new multitasking menu that makes it easier to switch between apps and windows, and a new way to copy and paste text and images between devices. Users can also create a work profile on their personal devices to keep their work and personal life separate.
  • Gaming features: Android 13 includes a number of new gaming features, such as a new game dashboard that makes it easier to find and launch games, and a new way to record and share gameplay footage. Users can also subscribe to Google Play Pass to access a library of premium games and apps.

Google I/O 2023 was held virtually and in-person at Shoreline Amphitheatre in Mountain View, Calif. The in-person event was limited to a small number of attendees, but the virtual event was open to everyone. Google has not released the exact number of people who attended the in-person event, but it is estimated that there were between 1,000 and 2,000 people in attendance. The virtual event had more than 60,000 registrants, and about 2 million people watched the keynote live stream.

The post Generative AI Thread Runs Through Google’s New Products appeared first on The New Stack.

]]>
AI Improves Developer Workflow, Says Gradle Dev Evangelist https://thenewstack.io/ai-improves-developer-workflow-says-gradle-dev-evangelist/ Fri, 19 May 2023 17:00:57 +0000 https://thenewstack.io/?p=22708592

Developer tools are scrambling to integrate AI into their products, no matter which part of the developer workflow they cater

The post AI Improves Developer Workflow, Says Gradle Dev Evangelist appeared first on The New Stack.

]]>

Developer tools are scrambling to integrate AI into their products, no matter which part of the developer workflow they cater to. One example is Gradle Build Tool, an open source build automation tool that has been around for fifteen years now. The company behind it, Gradle Inc, has been paying particular attention to AI, since it will fundamentally change the concept it coined: Developer Productivity Engineering (DPE).

I spoke with Trisha Gee, the lead developer evangelist at Gradle, about how AI is impacting the developer workflow. Prior to joining Gradle at the beginning of this year, Gee had over two decades of experience as a developer — mostly focusing on Java.

AI Is Additive for Devs

Gee says that her view on AI for developers has evolved rapidly. Similar to other senior devs I know, she initially dismissed AI’s significance. However, she has since recognized it as a valuable tool for developers.

She now thinks of AI as an addition to the developer’s toolkit rather than a replacement for it. Gee compares the evolution of AI tools to the advent of internet search engines like Google in the 1990s, which quickly became indispensable for developers troubleshooting problems. Just as Google and Stack Overflow made coding more efficient, she expects that using AI tools to generate code and answer specific questions will have a similar effect.

Gee emphasized, though, that developers must still rely on their own expertise and experience to filter AI-generated code and apply it appropriately within their codebase. She believes that AI can accelerate development by reducing the time spent on repetitive tasks — like writing boilerplate code — and enabling developers to focus on the bigger picture, such as ensuring the code meets business requirements.

How ML is Used in Testing

As well as AI code generation, machine learning is used in products like Gradle Enterprise, which aims to save developers’ time by automating time-consuming tasks and eliminating wasteful activities.

For instance, Gradle Enterprise offers features like “predictive test selection,” which uses machine learning to run tests impacted by code changes, instead of running the entire test suite. This approach improves efficiency by focusing on relevant areas, Gee said.

I asked whether code generation tools like GitHub Copilot, given the potential for errors in their output, have had a big impact on tools like Gradle.

She replied that, yes, having tools that generate code means there is a need for effective testing to validate the generated code, which is where Gradle comes in. She highlighted the significance of running tests quickly and efficiently, identifying failures, and avoiding repetitive failures across teams that are using code generation tools. She added that Gradle Enterprise can contribute to developer productivity by automating aspects of the testing process, similar to how code generation automates code creation.

The goal is not to replace developers’ work but rather to alleviate them from mundane tasks, she said, allowing devs to focus on the business problem at hand, ensuring the tests are meaningful, and verifying that everything operates as expected.

Gee added that Gradle Enterprise also utilizes machine learning for tasks like gathering data on builds, tests, and the environments they run on. This data-rich context presents opportunities for leveraging AI and machine learning techniques, she said.

Career Development in AI Era for Young Devs

Given her experience in the industry, I wondered if Gee had any advice for young developers entering the industry currently, when AI is both a potential boon and (perhaps) an existential threat to developer careers.

Gee highlighted the importance of being adaptable and having a willingness to continuously learn. While there may be new skills to acquire, she said, it is not a major problem as long as developers possess the ability to learn and adapt.

She mentioned Git as another example of a skill that developers had to adapt to quickly when it first came out.

“10 years ago, 15 years ago, when I was doing a lot of Java user group stuff with graduates in London, a lot of the graduates were panicking because they came out of university without understanding git,” she said. “And it’s a gap in their technical skill set, sure, but it’s a gap that you learn [to fill] on the job. You don’t need to understand everything about git during your training process. You learn that on the job, you see how other developers are using it, you understand what’s required of you in your team, in your business.”

Ultimately, she thinks that the learning process for new developers will involve acquiring new skills related to AI, similar to how they learn other skills — like using search engines or writing automated tests. So she sees AI as a natural part of the learning journey, rather than a significant shift in the skills required for a career in development.

Don’t Fear AI

Overall, Gee cautions against fear and fear-mongering about AI replacing developers’ jobs. She compares the use of AI tools to code generation features in IDEs, which were initially met with skepticism but are now widely embraced for their ability to make developers’ jobs easier. AI tools can be similarly helpful, she believes.

She added that she herself has used ChatGPT in development work, for thought organization and problem-solving. So it has already been a positive tool in her own job.

The post AI Improves Developer Workflow, Says Gradle Dev Evangelist appeared first on The New Stack.

]]>
Dev News: Dart 3 Meets Wasm, Flutter 3.10, and Qwik ‘Streamable JavaScript’ https://thenewstack.io/dev-news-dart-3-meets-wasm-flutter-3-10-and-qwik-streamable-javascript/ Sat, 13 May 2023 16:00:58 +0000 https://thenewstack.io/?p=22708063

Google released Dart 3 this week, with the big news being it is now a 100% sound null-safe language and

The post Dev News: Dart 3 Meets Wasm, Flutter 3.10, and Qwik ‘Streamable JavaScript’ appeared first on The New Stack.

]]>

Google released Dart 3 this week, with the big news being it is now a 100% sound null-safe language and the first preview of Dart to WebAssembly compilation.

“With 100% null safety in Dart, we have a sound type system,” wrote Michael Thomsen, the product manager working on Dart and Flutter. “You can trust that if a type says a value isn’t null, then it never can be null. This avoids certain classes of coding errors, such as null pointer exceptions. It also allows our compilers and runtimes to optimize code in ways it couldn’t without null safety.”

The trade-off, he acknowledged, is that migrations became a bit harder. However, 99% of the top 1000 packages on pub.dev support null safety, so Google expects the “vast majority of packages and apps that have been migrated to null safety” will work with Dart 3. For those who do experience problems using the Dart 3 SDK, there’s a Dart 3 migration guide.

Thomsen also announced a first preview of Dart to WebAssembly compilation. Flutter, which is written in Dart, already uses Wasm, he added.

“We’ve long had an interest in using Wasm to deploy Dart code too, but we’ve been blocked. Dart, like many other object-oriented languages, uses garbage collection,” he wrote. “Over the past year, we’ve collaborated with several teams across the Wasm ecosystem to add a new WasmGC feature to the WebAssembly standard. This is now near-stable in the Chromium and Firefox browsers.”

Compiling Dart to Wasm modules will help achieve high-level goals for web apps, including faster load times; better performance because Wasm modules are low-level and closer to machine code; and semantic consistency.

“For example, Dart web currently differs in how numbers are represented,” he wrote. “With Wasm modules, we’d be able to treat the web like a ‘native’ platform with semantics similar to other native targets.”

Also in Dart 3, Google added records, patterns and class modifiers. Support for multiple return values was Dart’s fourth highest-rated language issue, and by adding records, developers can “build up structured data with nice and crisp syntax,” Thomsen noted.

“In Dart, records are a general feature,” he stated. “They can be used for more than function return values. You also store them in variables, put them into a list, use them as keys in a map, or create records containing other records.”

Records simplify how you build up structured data, he continued, while not replacing using classes for more formal type hierarchies.

Patterns come into play when developers might want to break that structured data into its individual elements to work with them. Patterns shine when used in a switch statement, he explained. While Dart has had limited support for switch, in Dart 3, they’ve broadened the power and expressiveness of the switch statement.

“We now support pattern matching in these cases. We’ve removed the need for adding a break at the end of each case. We also support logical operators to combine cases,” he wrote.

Google also added class modifiers for fine-grained access control for classes.

“Unlike records and patterns that we expect every Dart developer to use, this is more of a power-user feature. It addresses the needs of Dart developers crafting large API surfaces or building enterprise-class apps,” Thomsen stated. “Class modifiers enable API authors to support only a specific set of capabilities. The defaults remain unchanged though. We want Dart to remain simple and approachable.”

Flutter v3.10 Released

Since Flutter is built on Dart, and Dart 3 launched this week, it’s not surprising that Google also launched Flutter version 3.10 at its Google I/O event Wednesday. It was buried in the slew of news announcements, but fortunately, more details were available in a blog post by Kevin Chisholm, Google’s technical program manager for Dart and Flutter.

Flutter 3.10 includes improvements to web, mobile, graphics and security. The framework now compiles with Supply Chain Levels for Software Artifacts (SLSA) Level 1, which adds more security features such as:

  • Scripted build process, which now allows for automated builds on trusted build platforms;
  • Multi-party approval with audit logging, in which all executions create auditable log records; and
  • Provenance, with each release publishing links to view and verify provenance on the SDK archive.

This is also the first step toward SLSA L2 and L3 compliance, which focus on protecting artifacts during and after the build process, Chisholm explained.

When it comes to the web, there are a number of new changes, including improved load times for web apps, because the release reduces the file size of icon fonts and prunes unused glyphs from Material and Cupertino. Also reduced in size: CanvasKit for all browsers, which should further improve performance.

It also now supports element embedding, which means developers can serve Flutter web apps from a specific element in a page. Previously, apps could either take up the entire page or display within an iframe tag.

The Impeller rendering engine was tested on iOS in the 3.7 stable release, but with v3.10 it is now the default renderer on iOS, which should translate into less jank and more consistent performance, Chisholm wrote. Indeed, eliminating jank is a big part of this release: Chisholm thanked open source contributor luckysmg, who discovered that it was possible to slash the time needed to get the next drawable layer from the Metal driver.

“To get that bonus, you need to set the FlutterViews background color to a non-nil value,” he explained. “This change eliminates low frame rates on recent iOS 120Hz displays. In some cases, it triples the frame rate. This helped us close over half a dozen GitHub issues. This change held such significance that we backported a hotfix into the 3.7 release.”

Among the other lengthy list of improvements are the ability to decode APNG images, improved image loading APIs and support for wireless debugging.

Qwik v1.0: A Full-Stack Framework with ‘Streamable JavaScript’

Qwik, a full-stack web framework, reached version 1.0 this week, with the Qwik team promising a “fundamentally new approach to delivering instant apps at scale.”

The open source JavaScript framework draws inspiration from React, Vue, Angular, Svelte, SolidJS and their meta frameworks — think Next.js, Nuxt, SvelteKit — according to the post announcing the new release. Qwik promises to provide the same strengths as these frameworks while adapting for scalability.

“As web applications get large, their startup performance degrades because current frameworks send too much JavaScript to the client. Keeping the initial bundle size small is a never-ending battle that’s no fun, and we usually lose,” the Qwik team wrote. “Qwik delivers instant applications to the user. This is achieved by keeping the initial JavaScript cost constant, even as your application grows in complexity. Qwik then delivers only the JavaScript for the specific user interaction.”

The result is that the JavaScript doesn’t “overwhelm” the browser even as the app becomes larger. It’s like streaming for JavaScript, they added.

To that end, Qwik solves for instant loading time with JavaScript streaming, speculative code fetching, lazy execution, optimized rendering time and data fetching, to name a few of the benefits listed in the post.
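As a rough illustration of the lazy-execution model (the component below is a hypothetical sketch, not taken from the Qwik announcement), the $ suffix marks boundaries that the optimizer can split out and fetch only when they are actually needed:

```tsx
import { component$, useSignal } from '@builder.io/qwik';

export const Counter = component$(() => {
  const count = useSignal(0);
  // The code behind onClick$ is only downloaded when the user actually clicks.
  return <button onClick$={() => count.value++}>Count: {count.value}</button>;
});
```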

It also incorporates ready-to-use integrations with popular libraries and frameworks, the post noted. Qwik also includes adapters for Azure, Cloudflare, Google Cloud Run, Netlify, Node.js, Deno and Vercel.

The post Dev News: Dart 3 Meets Wasm, Flutter 3.10, and Qwik ‘Streamable JavaScript’ appeared first on The New Stack.

]]>
Tech Byte: Meta Backs the OpenJS Foundation https://thenewstack.io/meta-backs-the-openjs-foundation-for-greater-diversity/ Thu, 11 May 2023 15:00:15 +0000 https://thenewstack.io/?p=22707774

Meta, the creator of popular open source projects like React, React-Native, and Jest, has joined the OpenJS Foundation, which provides

The post Tech Byte: Meta Backs the OpenJS Foundation appeared first on The New Stack.

]]>

Meta, the creator of popular open source projects like React, React-Native, and Jest, has joined the OpenJS Foundation, which provides vendor-neutral support for the open source JavaScript community.

As a gold member, Meta will be working with the foundation to promote diversity, equity, and inclusion within the community.

Robin Ginn, OpenJS Foundation Executive Director, said in a statement, “Welcome Meta! Their positive effect on the JavaScript ecosystem has been amazing. Heavy users at the scale of JavaScript itself, creators of React and React-Native, creators of multiple key open source projects. We look forward to working more with Meta’s leadership and expertise to increase support for the diverse open source communities at OpenJS.”

Meta Open Source has contributed significantly to the JavaScript ecosystem by creating and open sourcing many projects crucial to its growth, such as React, Jest, and Flow. Last year, Meta contributed its popular JavaScript testing project Jest to OpenJS, which received an enthusiastic response from developers for this community-led project.

Working collectively with other member companies and with the guidance of the OpenJS Foundation, Meta will continue to contribute and advocate in the community.

Meta’s decision to join the OpenJS Foundation [could] have a positive impact on the open source JavaScript community. With the foundation providing support for sustained growth and fostering inclusivity, Meta’s contribution and leadership [could] go a long way in improving the ecosystem.

The OpenJS Foundation is committed to supporting the healthy growth of the JavaScript ecosystem and web technologies by providing a neutral organization to host and sustain projects, as well as collaboratively fund activities for the benefit of the community at large.

The OpenJS Foundation is made up of 41 open source JavaScript projects, including Appium, Dojo, Jest, jQuery, Node.js, and webpack, and is supported by 30 corporate and end-user members, including GoDaddy, Google, IBM, Joyent, Microsoft, and Netflix.

These members recognize the interconnected nature of the JavaScript ecosystem and the importance of providing a central home for projects that represent significant shared value, according to the Linux Foundation.

All AI-generated posts on The New Stack are reviewed by an editor before going live. 

ChatGPT prompt: “Write a 500-word news story based only on the press release pasted below. Minimize any marketing jargon while emphasizing technical details.” Edited for further brevity.

Press Release: “Meta Joins the OpenJS Foundation,” Linux Foundation.

The post Tech Byte: Meta Backs the OpenJS Foundation appeared first on The New Stack.

]]>
Google’s New TensorFlow Tools and Approach to Fine-Tuning ML https://thenewstack.io/googles-new-tensorflow-tools-and-approach-to-fine-tuning-ml/ Wed, 10 May 2023 20:00:46 +0000 https://thenewstack.io/?p=22707785

Today at Google I/O, the web giant’s annual developer conference, Google announced a bunch of new AI tools — including

The post Google’s New TensorFlow Tools and Approach to Fine-Tuning ML appeared first on The New Stack.

]]>

Today at Google I/O, the web giant’s annual developer conference, Google announced a bunch of new AI tools — including new tooling for the TensorFlow ecosystem, a new one-stop shop for developers called ML Hub, and upgrades to its cross-platform set of ML solutions called MediaPipe.

Ahead of the announcements, I conducted an email interview with Alex Spinelli, Vice President of Product Management for Machine Learning at Google.

The new tools for TensorFlow include KerasCV and KerasNLP (allowing developers access to new “pre-trained” models), DTensor (for scaling via parallelism techniques), JAX2TF (a lightweight API for the JAX numerical framework), and the TF Quantization API (which is “coming soon,” but will allow developers to build models that are “cost and resource efficient”).

The KerasCV + KerasNLP suite (image via Google)

State of Google’s LLMs

I asked Spinelli whether developers would be able to use any of the above tools on Google’s large language models (LLMs).

“In March, we announced that developers who are experimenting with AI can build on top of our language models using the PaLM API,” he replied. “As part of that announcement, we made an efficient model of PaLM available, in terms of size and capabilities, and we’ll add other sizes soon. The API also comes with an intuitive tool called MakerSuite, which lets developers quickly prototype ideas and, over time, will have features for prompt engineering, synthetic data generation and custom-model tuning — all supported by robust safety tools.”

Spinelli added that at I/O, Google will be opening up a “private preview” of the PaLM API, “so more developers can prototype directly on the web with MakerSuite or with the tools they know and love, with integrations in Firebase and Colab.”

Why Use TensorFlow and Not LLMs

PaLM is Google’s biggest LLM, at 540 billion parameters, but it has a few other LLMs listed on the Stanford HELM index: Flan-T5 (11B), UL2 (20B), and T5 (11B). I asked Spinelli why a developer might want to use ML models via TensorFlow instead of Google’s LLMs. In other words, are there specific use cases that are best for TensorFlow?

He replied with three different use cases for TensorFlow ML:

  1. A developer wants to build their own model;
  2. A developer can solve a problem by using someone else’s model — either directly, or by fine-tuning it; and
  3. A developer can solve a problem by using a hosted large model — be it language, images, or a multi-modal combination of both.

On the first use case, Spinelli said a combination of TensorFlow and Keras (a deep learning library with a Python interface that runs on top of TensorFlow) was the best choice for building your own model. “They make it easy for you to define model architecture and train on your own data,” he said.

TensorFlow and Keras are also the best choice when using someone else’s model, Spinelli told me.

“Many models (see Kaggle Models or tfhub.dev) have been created by other developers with extension via Transfer Learning in mind,” he continued. “TF [TensorFlow] makes it super simple for you to do this to — for example — take a model that’s great at recognizing generic images, and retrain it to be excellent at spotting specific, particular, images; like diseases on an X-Ray.”

As for using a hosted large model, Spinelli said that “We’re working to extend TF and Keras to make their high-level APIs useful for developers to access existing large-language or other generative models.”

Fine-Tuning in Google’s Models

There is mention of devs being able to train models with the new tools, but no mention of fine-tuning. TensorFlow’s own documentation defines fine-tuning as training “the weights of the top layers of the pre-trained model alongside the training of the classifier you added.”

Fine-tuning is something that Meta offers with its LLaMA model, but no other big LLM currently offers access to the weights. So I asked Spinelli if there is anything in the new tools that will help devs with this fine-tuning.

“In its strictest sense, fine-tuning involves creating an entirely new instance of a model, but with some parts retrained for one’s specific scenario,” he replied. “However, when dealing with LLMs, you don’t usually do that, with the exception that you noted [LLaMA], because of the storage and costs involved.”

Spinelli claims that developers can get the same overall effect of fine-tuning using what he called “prompt tuning” or “parameter efficient tuning” [PET]. He said that both can be done with MakerSuite. “You can also prompt tune and P.E.T. programmatically with the PaLM API,” he added.

With all that said, Spinelli noted there will be one exception to the “prompt tune” and PET approaches. With Cloud AI (part of the Google Cloud suite), he said, “you can fine-tune our code-generation model with your own codebase, and you’ll get a private VPC with that instance of our codegen model that you can use to generate code that is aware of your particular codebase as well as our general purpose one.”

An ML Hub

With all these new product announcements, Google clearly wants to become a hub for ML developers — similar to how it caters to web developers with regular browser, SEO and other web platform updates. The new front page for ML developers, ML Hub, is being positioned as a kind of portal to “enable developers to build bespoke ML solutions.” It will likely be similar to web.dev, Google’s front page for web developers.

Indeed, like Google’s web development tooling, there is something for everyone in Google’s newly expanded ML toolset — including ways to access those much larger, and trendier, generative models.

The post Google’s New TensorFlow Tools and Approach to Fine-Tuning ML appeared first on The New Stack.

]]>
The New JavaScript Features Coming in ECMAScript 2023 https://thenewstack.io/the-new-javascript-features-coming-in-ecmascript-2023/ Tue, 09 May 2023 13:00:08 +0000 https://thenewstack.io/?p=22707415

This year’s annual update to ECMAScript, which formally standardizes the JavaScript language, will be approved in July 2023, but four

The post The New JavaScript Features Coming in ECMAScript 2023 appeared first on The New Stack.

]]>

This year’s annual update to ECMAScript, which formally standardizes the JavaScript language, will be approved in July 2023, but four proposals for new language features have already reached stage four. This means they’ve been signed off by the editors of ECMAScript in the TC39 working group that manages the language standard, have passed the test suite, and shipped in at least two implementations to check for real-world performance and issues.

Small but Helpful

Symbols as WeakMap keys fills in a small gap in the language, explained Daniel Ehrenberg, vice president of Ecma (the parent organisation of TC39) and a software engineer working on JavaScript developer experience at Bloomberg, who worked on the proposal. Introduced in ECMAScript 2015, WeakMap lets you extend an object with extra properties (for example, to keep track of how often the object is used) without worrying about creating a memory leak, because the key-value pairs in a WeakMap can be garbage collected.

Initially, you could only use objects as keys in a WeakMap, but you want the keys to be unique “and symbols were defined as a new immutable way, that cannot be recreated, so having those as a unique key in the weak map makes a lot more sense”, developer advocate and browser engineer Chris Heilmann told us. This integrates symbols more with these new data structures and might well increase usage of them.
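To make that concrete, here is a minimal sketch of the new capability (the key name and stored object are illustrative); note that only unique symbols qualify, not registered symbols created with Symbol.for():

```ts
// ES2023: unique symbols can now be WeakMap keys.
const metadata = new WeakMap();
const usageKey = Symbol("usageCounter"); // unique, non-registered symbol
metadata.set(usageKey, { uses: 0 });
console.log(metadata.get(usageKey)); // { uses: 0 }
// Once nothing else references usageKey, the entry becomes eligible for garbage collection.
```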

Two of the proposals improve working with arrays, which he notes are becoming increasingly powerful in JavaScript, avoiding the need to write functions and loop over data to process it.

“Now you can do a filter or map, and just have a one-liner for something that was super complex in the past.”

Change Array by Copy gives developers new methods for sorting, reversing and overwriting data without mutating the array it’s stored in. “You’ve always been able to sort arrays, but when you call a sort function it would change the current array; and in functional programming and the functional patterns that have become very popular [in JavaScript], people like to avoid mutations,” TC39 co-chair and head of Bloomberg’s JavaScript Infrastructure and Tooling team Rob Palmer explained.

This proposal lets developers call a method to change a single element in the array, using with or splice, and get a new array with that single change — or sort and reverse an array into a fresh array but leave the original array unmodified. This is simpler for developers because it makes array and tuple behavior more consistent, Heilmann pointed out. “The inconsistency between whether array prototypes change the original array or not is something that drove me nuts in PHP. You do a conversion, send it to a variable and then there’s nothing in the variable, because some functions don’t change the original one and others do change it. Any consistency we can bring to things so people don’t have to look it up every day is a very good idea. And anything that allows me to not have to deal with index numbers and shift them around is also a good thing!”
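A short sketch of the new copying methods (the data is illustrative); each call returns a fresh array and leaves the original untouched:

```ts
const scores = [3, 1, 2];
const sorted = scores.toSorted();     // [1, 2, 3]
const reversed = scores.toReversed(); // [2, 1, 3]
const patched = scores.with(0, 9);    // [9, 1, 2] (index 0 replaced in a copy)
console.log(scores);                  // still [3, 1, 2]
```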

Array find from last also does exactly what the name suggests, returning matching elements in an array starting at the end and working back, which can improve performance — or save writing extra code. “If you have a huge array, it’s really beneficial because you don’t have to look through the whole thing or reverse it before you look it up, so you don’t have to make a temporary duplicate — which developers do all the time,” Heilmann explained.
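For example (the sample data is illustrative), findLast and findLastIndex search from the end without reversing or copying the array:

```ts
const readings = [
  { id: 1, ok: true },
  { id: 2, ok: false },
  { id: 3, ok: true },
];
readings.findLast((r) => r.ok);      // { id: 3, ok: true }
readings.findLastIndex((r) => r.ok); // 2
```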

Most comments in JavaScript are there for developers working in the source code, to document how it works or record why it was written that way. Hashbang comments, which start with #!, are for specifying the path to the JavaScript interpreter that you want to use to run the script or module that follows the comment (a convention inherited from UNIX script files). CLI JavaScript hosts like Node.js already strip the hashbang out and pass the valid code onto the JavaScript engine, but putting this in the standard moves that responsibility to the JavaScript engine and makes sure it’s done in a uniform way.

Making hashbang grammar official in JavaScript gives it more consistency with the rest of the languages out there, he noted.

While serverside JavaScript is far from new, he said, “it feels to me like JavaScript has finally arrived as a serverside language with this, because when I think of Perl or PHP or all the other languages, you always have the hashbang.”

Although it’s another small change, it’s possible this will make it easier for JavaScript to participate in the AI and machine learning ecosystem, where Python is currently the dominant language.
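A hashbang-led script might look like the following sketch; the interpreter path assumes a Node.js host:

```ts
#!/usr/bin/env node
// Under the standardized hashbang grammar, the JavaScript engine itself skips this
// leading #! line rather than relying on the host (such as Node.js) to strip it.
console.log("Hello from a command-line JavaScript script");
```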

Larger Proposals

These four proposals are very likely to be everything we see in ECMAScript 2023, which Ehrenberg noted is a small update, but there are also some important larger proposals that have already reached stage three (which means the spec has been agreed, but can’t be developed further without a full test suite and the real world experience of shipping the feature in at least two implementations).

Reaching stage three isn’t a guarantee that a feature will make it into the standard (because the implementations can reveal that changes need to be made). But iterator helpers, Temporal, explicit resource management and decorators are all stage three proposals making good progress that could be on track for ECMAScript 2024.

Iterator Helpers (and the companion stage two proposal for async iterator helpers) aim to make extremely large data sets (including possibly infinite but enumerable ones) as easy to work with as finite data structures like arrays. This includes methods like find, filter, map and reduce, which Python developers will be familiar with from generator expressions and itertools (and which are available to JavaScript developers through libraries like lodash). “You can have an iterator and then map or for each or check whether some elements are there,” Ehrenberg explained.

Like change array by copy and hashbang grammar, this again brings useful options from other languages, because it’s the kind of feature that’s already widely used in languages like Python, Rust and C#.
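As a rough sketch of the stage three API (it has not yet shipped unflagged everywhere, so a polyfill such as core-js may be needed), the helpers chain lazily over an iterator, even an infinite one:

```ts
function* naturals() {
  let n = 1;
  while (true) yield n++;
}

// Lazily square values from an infinite generator and keep only the first three.
const firstSquares = naturals()
  .map((n) => n * n)
  .take(3)
  .toArray(); // [1, 4, 9]
```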

“I feel like we’re making pretty good progress towards catching up with Python from fifteen or twenty years ago.”

Almost Time for Temporal

We’re still waiting for Temporal, which former ECMAScript co-chair Brian Terlson once described to us as “the replacement for our broken Date object” (other developers call Date “full of many of the biggest gotchas in JavaScript”). This eagerly awaited top-level namespace for a new date and time API that covers the full range of date, time, time zones, calendars and even public holidays worldwide will give developers far better options for working with the complexities of dates and times.

Although Temporal reached stage 3 in 2021, it’s been waiting for the Internet Engineering Task Force (IETF) to standardize string formats used for calendar and time zone annotations. While there were hopes that it would be completed in 2022, it’s still in draft stage. However, there are no major open issues and Carsten Bormann, one of the editors of the IETF date format proposal, told The New Stack that he believes it’s ready for IETF last call. The delay has been down to procedural questions about amending RFC 3339, internet timestamps, rather than any issues with Temporal or the IETF date and time formats it will use, and that’s being worked through, he said. “We have wide agreement on the parts that Temporal needs; we just need to clear that process hurdle.”

It’s still possible that there could be, for example, changes to the calendar format Temporal uses, but developers can start using Temporal now with polyfills (although you may not want to use that in production). Once the IETF draft is officially adopted, there will still need to be two implementations before it can reach stage four but a lot of that work is already underway.
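A small sketch of what that polyfill-backed usage looks like today; the package name and dates are illustrative, and details could still shift before stage four:

```ts
import { Temporal } from "@js-temporal/polyfill";

const release = Temporal.PlainDate.from("2023-07-01");
const today = Temporal.Now.plainDateISO();
const daysUntilRelease = today.until(release).days; // whole days between the two dates

// Time zone aware arithmetic, including DST handling:
const nextStandup = Temporal.ZonedDateTime
  .from("2023-06-01T10:00[Europe/London]")
  .add({ weeks: 1 });
console.log(daysUntilRelease, nextStandup.toString());
```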

“I’m really hopeful that this will be the year when we will see Temporal ship in at least one browser.”

“This is being implemented so many times,” Ehrenberg told us. “The implementation is in progress in V8, in [WebKit’s] JSC, in SpiderMonkey; in LibJS, the Serenity OS JavaScript engine, they have a pretty complete Temporal implementation and there are multiple polyfills. In addition to the IETF status, there have also been a number of small bug fixes that have been getting in, based on things that we’ve learned over the course of implementing the feature.”

“Hopefully, in the next few months we will be coming to an end with those bug fixes. And I’m really hopeful that this will be the year when we will see Temporal ship in at least one browser.”

While Temporal isn’t one of the priorities for this year’s Interop browser compatibility project, it did get a lot of votes from developers as an API to consider. “This is visible to browsers — to everyone — that this is high priority,” Ehrenberg said.

Delivering Decorators

The TC39 working group has spent more than five years working on different iterations of the Decorators proposal: a way of adding functionality to an object without altering its original code or affecting other objects from the same class. Decorated functions are available in other languages, like Python and C#, and JavaScript developers have been using transpilers like Babel and TypeScript to get them. Those differ slightly from what the ECMAScript Decorators proposal will finally deliver, but with the help of a proposal from the TypeScript team, TC39 was able to avoid a breaking change.

“A lot of people are using experimental TypeScript decorators or Babel legacy decorators,” Ehrenberg noted: “in either case, you need to explicitly opt into it, but a lot of frameworks do use decorators and do have presets that include them — and those original decorators are a little bit different from what ends up being stage three Decorators.”

“We went through many iterations of the Decorator proposal and we finally arrived at one that we could agree met both the use cases and the transition paths that were needed from previous decorators and the implementability concerns from browsers. We were finally able to triangulate all of that. It does mean that there are some differences, but at the same time we’ve really tried to make sure that the transition is smooth.”

For example, when you export a class that has a decorator, the first Decorators proposal put the decorator before the export keyword — but a later version of the proposal changed the syntax, putting the decorator after the export.

“A lot of the community was pretty upset about the change because it would have transition costs and there were lots of strong opinions in both directions. And at the very last minute, we decided, you know what, you’re allowed to do either — but not both. In one particular exported class declaration, the decorators can come either before or after the exported keyword, because we saw that the transition path from existing use of decorators was important. We want to enable incremental adoption and treat the existing ecosystem as real: we’re not designing this in a vacuum.”

Palmer credits the TypeScript team with putting in extra effort to make sure that TypeScript and JavaScript continue to be aligned. Ehrenberg agreed.

“There was a scary moment where we thought that TypeScript might ship decorator before export without JavaScript allowing it; and I’m really glad that just in time, we were able to convince everyone to agree on the same thing. That’s the birth of standards.”

There will be a slight difference in behavior depending on which order you pick: If you put the decorator before the export keyword, then it won’t be included in the Function.prototype.toString() text. If the decorator comes after export or export default (or is in a class that isn’t exported), it will be included in the string.
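Here is a minimal sketch of a stage three class decorator (the decorator and class names are illustrative, and it assumes a toolchain with the new proposal enabled, such as TypeScript 5.0):

```ts
function logged(value: any, context: any) {
  if (context.kind === "class") {
    console.log(`decorating ${String(context.name)}`);
  }
  return value; // return the (possibly wrapped) class
}

@logged
export class Widget {}

// Also allowed, but not both at once:
//   export @logged class Widget {}
// Only the post-export placement is included in Function.prototype.toString().
```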

Making Resource Management Obvious

Having garbage collection doesn’t mean that JavaScript developers don’t need to think about managing memory and cleaning up resources, like file handles and network requests that are no longer needed. Some of the options for doing that work differ depending on where your code will run: you return a JavaScript iterator but close a Node.js file handle. And they depend on developers remembering to write the code and getting the code right.

“This makes it difficult to translate front-end development skills, where you might primarily work with the DOM, to back-end development, where you might be working with something like Node’s API, and vice versa. This inconsistency also makes it difficult for package authors to build lifetime management into their packages in a way that allows for reuse both on the web and on the server,” the creator of the proposal, Ron Buckton, told us.

Explicit Resource Management adds a new using statement (or await using for async code) to JavaScript, that’s similar to the with statement in Python or using in C#. Like const, it uses block scoping which developers will be familiar with since it’s been in JavaScript since ECMAScript 2015. You can open a resource like a file with using, work with the file, and then at the end of the block of code, the file will be automatically closed by the Symbol.dispose or Symbol.asyncDispose method in the using declaration.
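As a sketch of the proposed syntax (the resource class is hypothetical, and runtime and tooling support is still rolling out), a disposable resource implements Symbol.dispose and is bound with using:

```ts
class TempFile {
  constructor(public path: string) {
    // ...open a file handle for `path` here...
  }
  [Symbol.dispose]() {
    // ...close the handle and clean up...
  }
}

{
  using tmp = new TempFile("/tmp/report.txt");
  // work with tmp.path ...
} // tmp[Symbol.dispose]() runs automatically here, even if an exception was thrown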

“If closing the file means persisting something to a database you can make sure that you wait for that persistence to happen,” Ehrenberg explained.

If you need to compose multiple resources that will be used and then disposed of, there are container classes — DisposableStack and AsyncDisposableStack — which Buckton says were inspired by Python’s ExitStack and AsyncExitStack — that also let you work with existing objects that don’t yet use the new API.

The asynchronous version, await using, was temporarily split off into a separate Async Explicit Resource Management proposal, because the syntax for it wasn’t as easy to decide on. Now it has been agreed and has also reached stage three, so the proposals are being combined again and implementations are currently underway, Buckton says. According to Palmer:

“This is great for robust, efficient code, to really make sure you’re cleaning up your resources at the correct time.”

“I think this will be a big win for JavaScript developers, because previously, to get this effect reliably, you had to use try finally statements, which people would often forget to do,” Ehrenberg added. “You want to make sure to dispose of the resource, even if an exception is thrown.”

The feature is called “explicit” to remind developers that the resource cleanup will be done immediately and explicitly, as opposed to the implicit and somewhat opaque resource management you get with WeakMap, WeakRef, FinalizationRegistry or garbage collection. Using gives you an explicit, well-defined lifetime for an object that you know will be cleaned up in a timely way, so you can avoid race conditions, if you’re closing and reopening a file or committing transactions to a database.

“The garbage collector can run at weird and magical times, and you cannot rely on the timing,” Palmer warned.

It’s also not consistent across environments. “All JavaScript engines reserve the right to have reference leaks whenever they feel like it and they do have reference leaks at different times to each other,” Ehrenberg added.

“There are a lot of use cases for explicit resource management, from file IO and Stream lifetime management, to logging and tracing, to thread synchronization and locking, async coordination, transactions, memory management/resource pooling, and more,” Buckton said.

It will be particularly important for resources that have a significant impact on performance, but also drain battery. “I’m hoping that, as this proposal gets adopted by various hosts, we’ll soon be able to use ‘using’ and ‘await using’ with WebGPU and other DOM APIs where resource lifetime and memory management are extremely important, especially on mobile devices.”

Building on What’s New

Having proposals become part of ECMAScript doesn’t mean they don’t carry on developing, as implementers get more experience with them — and as new language features offer ways to improve them.

After a good many years, class fields (including private fields) were included in ECMAScript 2022, “but even though they’ve been shipping in browsers for years, some of the Node community found that there were some performance penalties in using these,” Palmer told us. To address that, Bloomberg funded Igalia to optimize private field performance in V8. “Now private field access is at least as fast as public fields and sometimes it’s even faster.”

Other work made it easier for developers to work with private fields by making them accessible inside the Chrome developer tools. From the top level of the console, you can now jump into private fields or look into them while inspecting an object. That doesn’t break any security boundaries, Palmer noted, because you’re in a development environment: “it makes life easier for the developer, and they are entitled to see what’s inside the class”.

In the future, Ehrenberg suggested, there might be a capability for authorized code to look into private fields, based on the stage three decorators proposal, which has features that aren’t in the existing decorators features in Babel and TypeScript. “When you decorate a private field or method, that decorator is granted the capability to look at that private field or method, so it can then share that capability with some other cooperating piece of code,” he explained.

“The new decorators provide a path towards more expressive private fields.”

As always, there are other interesting proposals that will take longer to reach the language, like type annotations, AsyncContext and internationalization work that — along with Temporal — will replace some commonly used but large libraries with well-designed, ergonomic APIs built into the language. There are also higher-level initiatives around standardizing JavaScript runtimes, as well as the long-term question of what ECMAScript can address next: we’ll be looking at all of those soon.

The post The New JavaScript Features Coming in ECMAScript 2023 appeared first on The New Stack.

]]>
Dev News: Angular v16, Next.js Updates and Prep for Deno 2.0 https://thenewstack.io/dev-news-angular-v16-next-js-updates-and-prep-for-deno-2-0/ Mon, 08 May 2023 14:13:29 +0000 https://thenewstack.io/?p=22707382

Angular v16 is the biggest release since the initial rollout of Angular, with “large leaps in reactivity, server-side rendering, and

The post Dev News: Angular v16, Next.js Updates and Prep for Deno 2.0 appeared first on The New Stack.

]]>

Angular v16 is the biggest release since the initial rollout of Angular, with “large leaps in reactivity, server-side rendering, and tooling,” Angular product lead Minko Gechev wrote on Wednesday. The new release includes a new reactivity model for Angular, which improves performance and developer experience, he added.

It enables better runtime performance by reducing the number of computations during change detection, Gechev wrote. It also supports fine-grained reactivity — in future releases, that will allow the team to check for changes only in affected components.

We recently wrote about Angular 16, but the official release announcement adds more information. New additions include:

  • The Angular signal library allows developers to define reactive values and express dependencies between them (see the sketch after this list).
  • RxJS Interoperability. “You’ll be able to easily ‘lift’ signals to observables via functions from @angular/core/rxjs-interop, which is in developer preview as part of the v16 release!” the post explained. “We are introducing a new RxJS operator called takeUntilDestroyed… by default, this operator will inject the current cleanup context.”
  • Server-side rendering, the number one opportunity for improving Angular, Gechev wrote. The announcement includes a developer preview of full app non-destructive hydration, and Angular partnered with the Chrome Aurora team to improve the performance and DX of hydration and server-side rendering.
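Picking up the signals bullet above, here is a minimal sketch of the new reactivity primitives in a standalone component (the component itself is illustrative):

```ts
import { Component, computed, effect, signal } from '@angular/core';

@Component({
  selector: 'app-counter',
  standalone: true,
  template: `<button (click)="increment()">{{ count() }} (double: {{ double() }})</button>`,
})
export class CounterComponent {
  count = signal(0);
  double = computed(() => this.count() * 2); // stays in sync automatically

  constructor() {
    // effect() must be created in an injection context, such as a constructor.
    effect(() => console.log(`count is now ${this.count()}`));
  }

  increment() {
    this.count.update((value) => value + 1);
  }
}
```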

“In the new full app non-destructive hydration, Angular no longer re-renders the application from scratch,” Gechev wrote. “Instead, the framework looks up existing DOM nodes while building internal data structures and attaches event listeners to those nodes.”

That means no more content flickering on an end-user page, easy integration with existing apps, better Core Web Vitals in certain scenarios, and a future-proofed architecture that enables fine-grained code loading with primitives that will ship later this year, he wrote. Early tests show up to a 45% improvement in Largest Contentful Paint with full app hydration, he added.

Version 16 also enables support for TypeScript 5.0, with support for ECMAScript decorators.

There’s also a reminder in the post that Angular will be removing the legacy, non-MDC-based components in v17. A migration guide is available.

Next.js 13.4 Features New App Router

Since the release of Next.js 13 six months ago, the team at Vercel — which created the open source web development framework — has been focused on building “the foundations for the future of Next.js — App Router — in a way that can be incrementally adopted without unnecessary breaking changes,” wrote Next.js lead maintainer Tim Neutkens and Vercel Engineer Sebastian Markbåge.

The release of 13.4 means developers can now start adopting the new router for production.

Next.js is starting to get a bit long in the tooth — it was built six years ago. What developers want to achieve with it has expanded over the years, and that’s created some hiccups, some of which this release aims to correct. For example, one founding principle for Next.js was, “Zero setup. Use the filesystem as an API.” Maintaining this has been tricky as developers have asked for more support for defining layouts, nesting pieces of UI as layouts, and more flexibility over defining loading and error states, the post explained.

“To make our router compatible with streaming, and to solve these requests for enhanced support for layouts, we set out to build a new version of our router,” the blog post stated. “With the Pages Router, layouts were not able to be composed, and data fetching could not be colocated with the component. With the new App Router, this is now supported.”

The App Router also allows developers to fetch data using async and await syntax — without an API.

“By default, all components are React Server Components, so data fetching happens securely on the server,” the post states. “Critically, the ‘data fetching is up to the developer’ principle is realized. You can fetch data and compose any component. And not just first-party components, but any component in the Server Components ecosystem, like a Twitter embed react-tweet, which has been designed to integrate with Server Components and run entirely on the server.”
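In practice, an App Router page can be an async Server Component that fetches its own data; the sketch below is illustrative, including the placeholder API endpoint:

```tsx
// app/posts/page.tsx
export default async function PostsPage() {
  const posts: { id: string; title: string }[] = await fetch(
    'https://api.example.com/posts',  // placeholder endpoint
    { cache: 'no-store' },            // opt out of caching so the data is always fresh
  ).then((res) => res.json());

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```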

The complete post details all the ways Next.js is going back to its core design principles with this new App Router, and provides a short FAQ about the impact of App Router.

Laying the Groundwork for Deno 2

The Deno team is working toward a major release of Deno 2 this year, but first, it’s released Deno v1.33. Deno is a Rust-based runtime for JavaScript, TypeScript, and WebAssembly.

The end goal is effortless coding, more security features and best-in-class performance, the team wrote. To that end, Deno 1.33 includes updates such as a built-in KV database and improvements to npm and Node compatibility.

Deno KV is an integrated key-value database within Deno. Developers can start building apps without worrying about installing dependencies, the blog announcement stated. It also notes, however, that KV is currently an unstable API, so devs will need the --unstable flag to use it.
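A minimal sketch of the KV API as currently documented (the keys and values here are illustrative); because the API is unstable, the script has to be run with the --unstable flag:

```ts
// deno run --unstable main.ts
const kv = await Deno.openKv();
await kv.set(["users", "alice"], { visits: 1 });
const entry = await kv.get(["users", "alice"]);
console.log(entry.value); // { visits: 1 }
kv.close();
```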

“This release brings a huge quality of life improvement when working with dynamic imports,” the blog added. “If you use a string literal in an import() call (e.g. import(“https://deno.land/std/version.ts”)), Deno will no longer require a permission to execute this import.”

The change makes it easier to conditionally execute some code in certain situations, the blog post noted, citing a CLI tool with many subcommands, where developers might want to load each subcommand’s handler only when that subcommand is invoked. Doing so can significantly improve the startup time of your tool, the team wrote.
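A sketch of that pattern (the subcommand and imported module are illustrative): because the specifier is a string literal, Deno can analyze it ahead of time and skips the extra permission prompt:

```ts
if (Deno.args[0] === "version") {
  // The handler's module is only fetched and evaluated when this subcommand runs.
  const { VERSION } = await import("https://deno.land/std/version.ts");
  console.log(`std version: ${VERSION}`);
}
```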

Deno has improved cache handling for npm packages, as well.

“Starting with this release, Deno will try its best to retrieve information from the registry when it encounters a missing version (or a version mismatch) of a package in the cache,” the post stated. “This should result in a lot fewer messages suggesting the use of the --reload flag to retrieve the latest registry information.”

The post Dev News: Angular v16, Next.js Updates and Prep for Deno 2.0 appeared first on The New Stack.

]]>
Why Developers Are Flocking to LLaMA, Meta’s Open Source LLM https://thenewstack.io/why-open-source-developers-are-using-llama-metas-ai-model/ Fri, 05 May 2023 11:00:41 +0000 https://thenewstack.io/?p=22707225

When it comes to generative AI, the open source community has embraced Meta AI’s LLaMA (Large Language Model Meta AI),

The post Why Developers Are Flocking to LLaMA, Meta’s Open Source LLM appeared first on The New Stack.

]]>

When it comes to generative AI, the open source community has embraced Meta AI’s LLaMA (Large Language Model Meta AI), which was released in February. Meta made LLaMA available in several sizes (7B, 13B, 33B, and 65B parameters), but at first it was restricted to approved researchers and organizations. However, when it was leaked online in early March for anyone to download, it effectively became fully open source.

To get an understanding of how developers are using LLaMA, and what benefits it gives them over similar LLMs from the likes of OpenAI and Google, I spoke to Sebastian Raschka from Lightning AI. He told me that developers are attracted to Meta’s LLaMA because — unlike with GPT and other popular LLMs — LLaMA’s weights can be fine-tuned. This allows devs to create more advanced and natural language interactions with users, in applications such as chatbots and virtual assistants.

Raschka should know. His role at Lightning AI is “Lead AI Educator,” reflecting both his academic background (he was previously a university professor of statistics) and his high-profile social media presence (he has 192,000 followers on Twitter and runs a Substack newsletter titled Ahead of AI).

LLaMA vs. GPT: Release the Weights!

LLaMA isn’t that different from OpenAI’s GPT-3 model, Raschka said, except that Meta has shared the weights. The makers of the other major LLMs have not done that.

In the context of AI models, “weights” refers to the parameters learned by a model during the training process. These parameters are stored in a file and used during the inference or prediction phase.

What Meta did, specifically, was release LLaMA’s model weights to the research community under a non-commercial license. Other powerful LLMs, such as GPT, are typically only accessible through limited APIs.

“So you have to go through OpenAI and access the API, but you cannot really, let’s say, download the model or run it on your computer,” said Raschka. “You cannot do anything custom, basically.”

In other words, LLaMA is much more adaptable for developers. This is potentially very disruptive to the current leaders in LLM, such as OpenAI and Google. Indeed, as revealed by a leaked internal Google memo this week, the big players are already concerned:

“Being able to personalize a language model in a few hours on consumer hardware is a big deal, particularly for aspirations that involve incorporating new and diverse knowledge in near real-time.”

As noted LLM developer Simon Willison put it, “while OpenAI and Google continue to race to build the most powerful language models, their efforts are rapidly being eclipsed by the work happening in the open source community.”

Use Cases

So what are some of the use cases for applications being built on top of LLaMA?

Raschka said that finance and legal use cases are good candidates for fine-tuning. However, he noted that larger companies may want to go beyond just fine-tuning and instead pre-train the entire model using their own data. Classification tasks are also popular so far — such as toxicity prediction, spam classification, and customer satisfaction ranking.

According to Raschka, using LLaMA can provide improved performance in apps compared to traditional machine learning algorithms, with accuracy improvements ranging from 5% to 10%. Mostly, this can be achieved just with fine-tuning.

“It’s something that is also accessible to people,” he said, “because you don’t need to pre-train the model. You can just fine-tune it, essentially.”

LoRA and Other Tools

One of the tools developers can use to fine-tune LLaMA is LoRA (Low-Rank Adaptation of Large Language Models), which is available for free on Microsoft’s GitHub account. I asked Raschka how this works.

He began by saying there are various techniques for fine-tuning LLMs, such as hard tuning, soft tuning, prefix tuning, and adapter methods. He explained that the adapter method is attractive because it trains only small additional adapter layers while keeping the rest of the transformer frozen, which results in far fewer trainable parameters and faster training times. LoRA is one type of adapter method, and Raschka said it uses a mathematical trick to decompose large weight-update matrices into smaller ones, resulting in fewer parameters and more storage efficiency. In effect, this means you can do the fine-tuning in much less time.

“When I do the smaller method, where I only have these intermediate layers like LoRA, it takes only one to three hours instead of 18 hours on the same data set, basically. So it’s an advantage because you have smaller parameters.”

Techniques like LoRA are useful for deploying LLMs to multiple customers, he added, as it only requires saving the small matrices.

Devs and Fine-Tuning

Fine-tuning is a step beyond prompt engineering, so I asked Raschka whether developers will need to learn how to do it.

Raschka thinks that understanding how to use language models will be a useful skill for developers, but it’s not necessary for them to be in charge of fine-tuning the models at their company unless they have very specific needs. For small companies, they can use a general tool like GPT, and for larger companies he thinks there will be a team member who is in charge of fine-tuning the models.

What developers are definitely interested in is implementing AI models into their existing applications. This is where Raschka’s employer, Lightning AI, comes in. It offers an open source framework called PyTorch Lightning, which is used for implementing deep learning models. Lightning AI also offers cloud access and helps users deploy machine learning systems on the cloud. Incidentally, the creator of PyTorch Lightning, William Falcon, was a Ph.D. intern at Facebook AI Research during 2019 — which likely influenced Lightning AI’s support of LLaMA.

Also worth noting: Lightning AI has its own implementation of the LLaMA language model called Lit-LLaMA, which is available under the Apache 2.0 license. Researchers from Stanford University have also trained a fine-tuned model based on LLaMA, called Alpaca.

Conclusion

LLaMA does seem like a great option for developers wanting more flexibility in using large language models. But as Raschka points out, while fine-tuning is becoming increasingly accessible, it is still a specialized skill that may not be necessary for every developer to learn.

Regardless of whether or not they do the fine-tuning, developers increasingly need to understand how to use LLMs to improve certain tasks and workflows in their applications. So LLaMA is worth checking out, especially since it’s more open than GPT and other popular LLMs.

The post Why Developers Are Flocking to LLaMA, Meta’s Open Source LLM appeared first on The New Stack.

Top 5 NLP Tools in Python for Text Analysis Applications https://thenewstack.io/top-5-nlp-tools-in-python-for-text-analysis-applications/ Wed, 03 May 2023 17:52:14 +0000 https://thenewstack.io/?p=22707067


Text analysis applications need to utilize a range of technologies to provide an effective and user-friendly solution. Natural Language Processing (NLP) is one such technology, and it is vital for creating applications that combine computer science, artificial intelligence (AI), and linguistics. However, implementing NLP algorithms requires a programming language with the right library support.

In this article, we will discuss using NLP tools in Python for text analysis applications — including available libraries, and how they can be used.

The Purpose of Natural Language Processing

NLP is a type of artificial intelligence that can understand the semantics and connotations of human languages, while effectively identifying any usable information. This acquired information — and any insights gathered — can then be used to build effective data models for a range of purposes.

In terms of text analysis, NLP algorithms can perform a range of functions that include:

  • Text mining
  • Text analysis
  • Text classification
  • Speech recognition
  • Speech generation
  • Sentiment analysis
  • Word sequencing
  • Machine translation
  • Creating dialog systems
  • and more

This functionality has put NLP at the forefront of deep learning environments, allowing important information to be extracted with minimal user input. This allows technology such as chatbots to be greatly improved, while also helping to develop a range of other tools, from image content queries to voice recognition.

Text analysis web applications can be easily deployed online using a website builder, allowing products to be made available to the public with no additional coding. For a simple solution, you should always look for a website builder that comes with features such as a drag-and-drop editor, and free SSL certificates.

Natural Language Processing and Python Libraries

Python, a high-level, general-purpose programming language, can be applied to NLP to deliver various products, including text analysis applications. This is thanks to Python’s many libraries that have been built specifically for NLP.

Python libraries are groups of related modules containing bundles of code that can be reused in new projects. These libraries make a developer’s life much easier, as they save them from rewriting the same code time and time again.

Python’s NLP libraries aim to make text preprocessing as effortless as possible, so that applications can accurately convert free-text sentences into structured features that can be used by a machine learning (ML) or deep learning (DL) pipeline. Combined with a user-friendly API, the latest algorithms and NLP models can be implemented quickly and easily, so that applications can continue to grow and improve.

The Top 5 Python NLP Libraries

Now that we have an understanding of what natural language processing can achieve and the purpose of Python NLP libraries, let’s take a look at some of the best options that are currently available.

1. TextBlob

TextBlob is a Python (2 and 3) library used to process textual data, with a primary focus on making common text-processing functions accessible via easy-to-use interfaces. Objects within TextBlob behave like Python strings while also delivering NLP functionality, which helps when building text analysis applications.

TextBlob’s API is extremely intuitive and makes it easy to perform an array of NLP tasks, such as noun phrase extraction, language translation, part-of-speech tagging, sentiment analysis, WordNet integration, and more.

This library is highly recommended for anyone relatively new to developing text analysis applications, as text can be processed with just a few lines of code.
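
A minimal sketch of what that looks like (the sample sentence is invented, and TextBlob’s corpora need to be fetched once with python -m textblob.download_corpora):

from textblob import TextBlob  # pip install textblob

blob = TextBlob("The new release is impressively fast, but the documentation feels thin.")

print(blob.sentiment)      # polarity and subjectivity scores
print(blob.noun_phrases)   # noun phrase extraction
print(blob.tags)           # part-of-speech tags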

2. SpaCy

This open source Python NLP library has established itself as the go-to library for production usage, simplifying the development of applications that focus on processing significant volumes of text in a short space of time.

SpaCy can be used for preprocessing text in deep learning environments, for building systems that understand natural language, and for creating information extraction systems.

Two of the key selling points of SpaCy are that it features many pre-trained statistical models and word vectors, and has tokenization support for 49 languages. SpaCy is also preferred by many Python developers for its extremely high speeds, parsing efficiency, deep learning integration, convolutional neural network modeling, and named entity recognition capabilities.
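
A short sketch of typical usage, assuming the small English model has been installed with python -m spacy download en_core_web_sm (the sample sentence is invented):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

for ent in doc.ents:              # named entity recognition
    print(ent.text, ent.label_)

for token in doc:                 # tokenization and part-of-speech tags
    print(token.text, token.pos_)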

3. Natural Language Toolkit (NLTK)

NLTK consists of a wide range of text-processing libraries and is one of the most popular Python platforms for processing human language data and text analysis. Favored by experienced NLP developers and beginners, this toolkit provides a simple introduction to programming applications that are designed for language processing purposes.

Some of the key features provided by Natural Language Toolkit’s libraries include sentence detection, POS tagging, and tokenization. Tokenization, for example, is used in NLP to split paragraphs and sentences into smaller components that can be assigned specific, more understandable, meanings.

NLTK’s interface is very simple, giving access to over 50 corpora and lexical resources. Thanks to the large number of libraries it makes available, NLTK offers all the crucial functionality needed to complete almost any type of NLP task in Python.
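
Sentence detection, tokenization and POS tagging look roughly like this (a minimal sketch; the required data packages are downloaded on first use):

import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt")                       # tokenizer models
nltk.download("averaged_perceptron_tagger")  # POS tagger model

text = "NLTK makes language processing approachable. It also ships with dozens of corpora."
sentences = sent_tokenize(text)              # sentence detection
tokens = word_tokenize(sentences[0])         # word tokenization
print(nltk.pos_tag(tokens))                  # part-of-speech tags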

4. Gensim

Gensim is a bespoke Python library designed to deliver document indexing, topic modeling and retrieval solutions, using a large number of corpora resources. Gensim’s algorithms are memory-independent with respect to corpus size, which means it can process input that exceeds the available RAM on a system.

All the popular NLP algorithms can be implemented via the library’s user-friendly interfaces, including algorithms such as Hierarchical Dirichlet Process (HDP), Latent Dirichlet Allocation (LDA), Latent Semantic Analysis (LSA/LSI/SVD), and Random Projections (RP).

Gensim’s accessibility is further enhanced by the plethora of documentation available, in addition to Jupyter Notebook tutorials. However, it should be noted that to use Gensim, the Python packages SciPy and NumPy must also be installed for scientific computing functionality.
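
A small sketch of the topic-modeling workflow (the toy documents are made up; in practice the corpus would be streamed from disk, which is where the memory independence pays off):

from gensim import corpora, models

docs = [["graph", "trees", "minors", "survey"],
        ["graph", "paths", "trees"],
        ["user", "interface", "response", "time"],
        ["user", "survey", "interface"]]

dictionary = corpora.Dictionary(docs)               # map tokens to integer ids
corpus = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic in lda.print_topics():
    print(topic)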

5. PyNLPl

Last on our list is PyNLPl (pronounced “pineapple”), a Python library made up of several custom Python modules designed specifically for NLP tasks. The most notable feature of PyNLPl is its comprehensive library for working with Format for Linguistic Annotation (FoLiA) XML documents.

The platform is segmented into different packages and modules that are capable of both basic and advanced tasks, from the extraction of things like n-grams to much more complex functions. This makes it a great option for any NLP developer, regardless of their experience level.
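
To show what n-gram extraction means in practice, here is a plain-Python sketch of the operation itself; it deliberately avoids PyNLPl’s own classes, whose exact interfaces are not reproduced here:

def ngrams(tokens, n):
    # Slide a window of size n over the token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "natural language processing with python".split()
print(ngrams(tokens, 2))  # bigrams
print(ngrams(tokens, 3))  # trigrams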

Conclusion

Python is the perfect programming language for developing text analysis applications, due to the abundance of custom libraries available that are focused on delivering natural language processing functions.

Five of the best NLP libraries available are TextBlob, SpaCy, NLTK, Gensim, and PyNLPl. This is based on their accessibility, intuitive interfaces, and range of functionality.

The post Top 5 NLP Tools in Python for Text Analysis Applications appeared first on The New Stack.

Vercel Offers Postgres, Redis Options for Frontend Developers https://thenewstack.io/vercel-offers-postgres-redis-options-for-frontend-developers/ Mon, 01 May 2023 16:00:40 +0000 https://thenewstack.io/?p=22706763


Increasingly, cloud provider Vercel is positioning itself as a one-stop for frontend developers. A slew of announcements this week makes that direction clear by adding to the platform a suite of serverless storage options, as well as new security and editing features.

“Basically, for the longest time, frontend developers have struggled to come to define how you put together these best-in-class tools into a single platform,” Lee Robinson, Vercel’s vice president of developer experience, told The New Stack. “The idea here really is what would storage look like if it was reimagined from the perspective of a frontend developer.”

All of the announcements will be explored in a free online conference of sorts later this week.

Rethinking Storage for Frontend Developers

Vercel wanted to think about storage that works with new compute primitives, such as serverless and edge — functions that mean frontend developers don’t have to think through some of the more traditional ways of connecting to a database, Robinson said.

Developers are moving away from monolithic database architectures and embracing distributed databases “that can scale and perform in the cloud,” the company said in its announcement. Vercel also wants to differentiate by integrating storage with JavaScript frameworks, such as Next.js, SvelteKit or Nuxt, Robinson said.

The new options came out of conversations in which developers said they wanted first-party storage integration, a unified way to handle billing and usage, and a single account to manage both their compute and their storage, all integrated into their frontend framework and frontend cloud, Robinson added.

“Historically, frontend developers — trying to retrofit databases that were designed for a different era — have struggled to integrate those in modern frontend frameworks,” Robinson said. “They have to think about manually setting up connection pooling as their application scales in size and usage. They have to think about dialing the knobs for how much CPU or storage space they’re allotting for their database. And for a lot of these developers, they just want a solution that more or less works out of the box and scales with them as their site grows.”

The three storage products Vercel announced this week are:
1. Vercel Postgres, through a partnership with Neon.

“Postgres is an incredible technology. Developers love it,” Robinson said. “We wanted to build on a SQL platform that was reimagined for serverless and that could pair well with Vercel as platform, and that’s why we chose to have the first-party integration with Neon, a serverless database platform, a serverless Postgres platform.”

The integration will give developers access to a fully managed, highly scalable, truly serverless fault-tolerant database, which will offer high performance and low latency for web applications, the company added. Vercel Postgres is designed to work seamlessly with the Next.js App Router and Server Components, which allow web apps to fetch data from the database to render dynamic content on the server, Vercel added.

2. Vercel KV, a scalable, durable Redis-compatible database.

Redis is used as a key-value store in frontend development. Like Postgres, Redis is one of the top-rated databases and caches among developers, he said. Developers love its flexibility, its API and the fact that it’s open source, he said.

“These databases can be used for rate limiting, session management and application state,” Vercel stated in its press release. “With Vercel KV, frontend developers don’t need to manage scaling, instance sizes or Redis clusters — it’s truly serverless.”

Vercel’s lightweight SDK works from edge or serverless functions and scales with a brand’s traffic.

“The interesting thing here — and what I’m really excited about with this one — is that traditionally, a lot of Redis instances would be ephemeral. So you would use them as a cache, you would store some data in them, and that cache would expire,” Robinson said. “The cool thing about durable storage, or our durable Vercel KV for Redis, is that you can actually use it like a database. You can store data in there and it will persist. So developers get the power and the flexibility that they love from Redis.”

3. Vercel Blob, secure object storage, which has been one of the top requests from the Vercel community. Vercel Blob offers file storage in the cloud using an API built on Web standard APIs, allowing users to upload files or attachments of any size. It will enable companies to host moderately complex apps entirely on Vercel without the need for a separate backend or database provider.

“Vercel Blob is effectively a fast and simple way to upload files,” Robinson said. “We’re working in partnership with Cloudflare and using their R2 product that allows you to effectively very easily upload and store files in the cloud, and have a really simple API that you can use; again, that works well with your frontend frameworks to make it easy to store images or any other type of file.”

Each offers developers an easy way to solve different types of storage problems, he said.

“If you step back and you look at the breadth of the storage products that we’re having these first-party integrations for, we’re trying to give developers a convenient, easy way to solve all of these different types of storage solutions,” Robinson said.

New Security Offerings from Vercel

Along with Vercel’s new storage products, the frontend cloud provider has also launched Vercel Secure Compute, which gives businesses the ability to create private connections between serverless functions and protect their backend cloud. Previously, companies had to allow all IP addresses on their backend cloud for a deployment to be able to connect with it, Vercel explained. With Vercel Secure Compute, the deployments and build container will be placed in a private network with a dedicated IP address in the region of the user’s choice and logically separated from other containers, the press release stated.

“Historically on the Vercel platform, you’ve had your compute, which is serverless functions or edge functions, and when we talk to our largest customers, our enterprise customers, they love the flexibility that offers, but they wanted to take it a step further and add additional security controls on top,” Robinson said. “To do that, we’ve offered a product called Vercel Secure Compute, which allows you to really isolate that compute and put it inside of the same VPC [virtual private cloud] as the rest of your infrastructure.”

It’s targeting large teams who have specific security rules or compliance rules and want additional control over their infrastructure, he added. Along with that, they introduced Vercel Firewall, with plans to introduce a VPN at some point in the future.

“The same customers when they’re saying, ‘I want more control, more granularity over my compute,’ they also want more control over the Vercel Edge network, and how they can allow or block traffic. So with Vercel firewall we’re giving our enterprise customers more flexibility for allowing or blocking specific IP addresses,” Robinson said.

Visual Editing Pairs with Comments on Preview

The company also released Vercel Visual Editing, which dovetails with the company’s December release of Comments on Preview Deployments. Visual Editing means developers can work with non-technical colleagues and across departments to live-edit site content. To do that, Vercel partnered with Sanity, a real-time collaboration platform for structured content, to introduce a new open standard for content source mapping for headless CMS [content management systems]. The new standard works with any framework and headless CMS, the company added.

Vercel used it for the blog posts it’s creating about the new announcements, collectively nicknamed Vercel Ship, allowing the team to edit the content.

“The way that visual editing pairs into this, it actually works in harmony with Comments,” he said. “So for example, all of the blog posts that we’re working on for this upcoming Vercel Ship week, we’re using a combination of comments, as well as visual editing to allow our teams to give feedback say, ‘Let’s change this word here to a different word. Let’s fix this typo.’ Then the author or the editors can go and click the edit button go in make those changes directly and address the comment.”

The post Vercel Offers Postgres, Redis Options for Frontend Developers appeared first on The New Stack.

Dev News: Babylon.js 6.0, Vite Update, and the Perils of AI https://thenewstack.io/dev-news-babylon-js-6-0-vite-update-and-the-perils-of-ai/ Sat, 29 Apr 2023 16:00:53 +0000 https://thenewstack.io/?p=22706719


Babylon.js 6.0 was released this week. The web-based 3D framework is a WebGL-based graphics engine with a visual scene builder and best-in-class physics-based rendering. The update incorporates new physics plugins, fluid rendering, screen reader support and improvements to how reflections are handled, according to a blog post by Babylon.js.

“We are very proud and excited to announce that the world-famous Havok team is bringing a new physics implementation to Babylon.js for FREE!” the blog post states. “Over the past year, we’ve been secretly working with the incredible Havok team to make some of the most advanced physics features on the web available to you, the amazing Babylon.js developers community!”

The Havok Engine is best known for running games such as Assassin’s Creed: Odyssey and The Legend of Zelda: Breath of the Wild. Havok’s expertise comes to Babylon.js through a special Wasm plugin, along with a complete overhaul of the Babylon.js Physics API, the post noted. That will provide more power, control and features while making Babylon.js 6.0 easier to use, the company stated. A demo is available, or you can try it out for yourself.

The update also introduces new performance priority modes that can produce up to 50x faster rendering and performance. Developers can choose between Backwards Compatibility Mode, Intermediate Mode, or Aggressive Mode.

Babylon.js 6.0 also incorporates:

  • New fluid rendering;
  • An updated screen space reflection model;
  • A new Texture Decals feature that allows developers to project a decal through a mesh’s UV space so it is overlaid on top of the object’s material texture. “This unlocks some fun new interaction possibilities for truly immersive web experiences without sacrificing performance,” the blog post noted;
  • Node Material, which allows developers to create complicated and interactive shaders without writing a single line of code. “With this work, it is now possible for developers to build more advanced 3D Graphics techniques into Node Material shaders including things like Ray Marching — check out the proof of concept using the new Node Material changes”;
  • Node Material Tri-Planar and Bi-Planar Projection Nodes. The former enables projecting textures onto 3D objects regardless of the mesh UVs. The bi-planar node works in a similar way but uses two 2D textures instead of three, which saves GPU calculations and leads “to seamless textures with a smaller hit to performance”; and
  • GUI Editor moves out of beta with this release. “This version builds on the Beta with a ton of stability improvements and bug fixes, but most importantly introduces a tighter connection to your Babylon.js playgrounds,” the blog post stated.

There’s also a Figma-to-Babylon.js community extension by James Simonson, which allows developers to export Figma GUI designs directly into Babylon.js scenes. Finally, the core Babylon.js scene tree is now visible to screen readers, improving accessibility — as the screen reader can now narrate scene elements and text to describe the scene to users.

Babylon.js 6.0 also includes a restructuring of Babylon.js’ documentation to make it more accessible to those who want to learn Babylon.js and those who want to integrate Babylon.js into their existing web applications, the post said.

Vite 4.3 Released

Vite 4.3 was released this week with performance improvements, according to this announcement. Vite is an open source development tool used for modern web applications, which comes with a dev server and bundler.

The website has more details, but this chart from the announcement shows some of the improvements over the previous Vite release.

Chart: Vite 4.3 Performance Improvements (image via Vite)

Vite 4.3’s vite-plugin-inspect now has more performance-related features, to allow developers to pinpoint plugins or middleware that are bottlenecks for applications.

Using vite --profile and then pressing “p” once the page loads will save a CPU profile of the dev server startup, the announcement noted.

“You can open them in an app as speedscope to identify performance issues,” the release stated. “And you can share your findings with the Vite Team in a Discussion or in Vite’s Discord.”

State of JavaScript 2022 Released

The State of JavaScript 2022 shows that TypeScript’s popularity continues to grow, with more developers saying they use only TypeScript (20.7%) — compared to 8.2% for vanilla JavaScript. The framework Svelte also saw a jump in usage, nearly doubling its adoption, although it’s still behind React, Angular and Vue.js, which have dominated for four years running, according to this blog post by Andrzej Wysoczański, head of frontend at The Software House.

“A lot of frontend developers have an eye on Svelte, so it’s only a matter of time before it joins the most used JavaScript frameworks in the future,” Wysoczański wrote, adding that his team of developers created a simple game with Svelte in 2019 to see how it works. “The experience of working with Svelte was quite positive, and we are closely following its further growth,” he added.

More AI Means More Productivity and Problems

“Big Code” is when a codebase is made up of millions or even billions of lines of code, written by thousands of developers over time — and using AI for software development is likely to escalate the Big Code problem, predicts VentureBeat.

The article cites a recent study by Sourcegraph that surveyed more than 500 software developers and engineers and found that 95% were already using AI tools to write code. Among the AI tools they’re tapping into are ChatGPT, GitHub’s Copilot, and Cody, an AI coding assistant launched by Sourcegraph. Meanwhile, the report also found that only 65% of companies have a Big Code plan. Even fewer had a specific plan for using AI.

The report calls AI the best thing to happen to development in terms of productivity, but potentially the worst thing to happen in terms of codebases, subsequent technical debt and, of course, the security implications.

The post Dev News: Babylon.js 6.0, Vite Update, and the Perils of AI appeared first on The New Stack.
