Neil deGrasse Tyson on AI Fears and Pluto’s Demotion
June 9, 2023 | https://thenewstack.io/neil-degrasse-tyson-on-ai-fears-and-plutos-demotion/

The problem with artificial intelligence, said famed astrophysicist Neil deGrasse Tyson, is that people don’t realize how long they’ve been using technologies that are, essentially, AI. So they think it’s something new. But really AI is something that’s been around for a while, from Google Maps to Siri, he pointed out.

“The dire predictions for AI make very good media clickbait as, of course, the public now thinks of AI as an enemy of society without really understanding what role it has already played in society,” Tyson said. “I think once you become accustomed to something, you no longer think of it as AI. I can talk into my cell phone and say, ‘Where’s the nearest Starbucks, I want to get there before it closes and I need the shortest traffic route,’ and [it] gives you that answer in moments, and not a single human being was involved in that decision. So again, this is not a computer doing something rote. It’s a computer figuring stuff out that a human being might have done and would have taken longer. Nobody’s calling that AI — why not?”

Tyson, who directs the Hayden Planetarium, spoke last week in New York at Rev 4, a data science and analytics conference held by Domino Data Lab. Tyson pointed out that computers and AI have been doing the tasks of humans for some time now.

“Part of me sees what’s happened in recent months, where this AI power has crossed over this line in the sand and now it’s affecting people in the liberal arts world. It can now compose their term paper and they’re losing their shit over it. They’re freaking out over this,” Tyson said. “And I think ‘What do you think it’s been doing for the rest of us for the past 60 years?’ When it beat us at chess? Did you say, oh my gosh, ‘That’s the end of the world?’ No, you didn’t, you were intrigued by this. It beat us at Go, it beat us at Jeopardy. Now it can write a term paper, and you’re freaking out.”

He acknowledged that guidance is needed with AI, as it is with any powerful tool, but pointed out that he doesn’t think it’s uniquely placed to end civilization relative to other powerful tools — “We’ve created nuclear weapons that are controlled by computers,” he added.

“Yes, you put in some checks and balances, but the idea that some humanoid robot is going to come out, that’s not the direction we’re going,” he said. ”It’s a hard problem, because people fear what they don’t understand. And you have the captains of industry saying, ‘We should fear this.’ We presume they understand what they’re talking about. So my reply here is, yes, we should fear it enough to monitor our actions closely about what you’re doing with it.”

Tyson sat on the Defense Innovation Board at the Pentagon, where members discussed the role of AI in kill decisions. If there is such a thing as the ethics of war, the reasoning went, then AI can never make that ultimate decision. The board recommended there must be a human in the loop, and the military adopted the recommendation.

That said, AI’s ability to create deep fakes, from voice to video, may finally break the internet, he cautioned. It will even make it hard to peddle conspiracy theories like Pizzagate, he said.

“Nobody can trust anything. Even the people who didn’t used to trust things, they can’t even trust the things that were wrong that they trusted. So that’s basically the end of the internet,” Tyson said. “People will return to having one-on-one conversations with each other and actually calling people on the phone, and the internet will just be a playground of fake things. The tombstone [will be] internet 1992 to 2024 — 32 years, it had a good run, rest in peace.”

Tyson challenged the audience with reflections on data, including a look at how bad data led to Pluto becoming — then unbecoming — a planet. It was first identified as a planet because Neptune’s orbit didn’t follow Newton’s Law, leading astrophysicists to believe there must be a Planet X out there affecting Neptune’s orbit. Astronomers searched the region where Planet X should have been and found a small object they named Pluto. But Earth’s moon has five times the mass of Pluto, and there’s no way something so small could have disrupted Neptune’s orbit, he said.

“I have hate mail from children,” Tyson said. “I was implicated in this demotion. I didn’t demote, but I was definitely an accessory. I definitely drove a getaway car on this one.”

It was a problem of bad data collected over 10 years by the US Naval Observatory, he said. Once that data was removed, Neptune “landed right on” Newton’s Law, eliminating the need for Planet X.
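
For the data scientists in the room, the lesson translates directly. Here is a toy sketch (invented numbers, not the Naval Observatory’s records) of how one bad batch of observations can manufacture a phantom signal that vanishes once the batch is excluded:

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 200)                    # observation times (arbitrary units)
prediction = 2.0 * t + 5.0                      # stand-in for the Newtonian prediction
obs = prediction + rng.normal(0, 0.1, t.size)   # good data: tiny measurement noise
obs[:20] += 3.0                                 # one instrument's systematic error

def worst_residual(times, values):
    # Fit a straight line and report the largest disagreement with it.
    slope, intercept = np.polyfit(times, values, 1)
    return np.abs(values - (slope * times + intercept)).max()

print(worst_residual(t, obs))            # large residual: "there must be a Planet X"
print(worst_residual(t[20:], obs[20:]))  # bad batch removed: the model fits fine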

In a similar vein, Mercury’s orbit does not follow Newton’s Law, which led to another search for a hypothetical planet, called Vulcan (after the Roman god, not Spock‘s home planet), a search that ended only with Albert Einstein’s theory of relativity.

“1916, Albert Einstein introduces an upgrade in the laws of physics, the laws of motion and the laws of gravity, the general theory of relativity demonstrating that under strong gravitational fields, the laws of motion do not follow Newton’s law,” he said. “It’s general relativity. It’s a different physics model. Vulcan died overnight — it was unnecessary.”

Data and even the frameworks in which the data is used can be flawed, he added.

“Even if the analysis is accurate within itself, the fact that you do this analysis instead of that is what could be flawed,” he told the audience of data scientists.

Domino Data Lab paid for The New Stack’s travel and accommodations to attend the Rev4 conference.

Can DevEx Metrics Drive Developer Productivity?
June 7, 2023 | https://thenewstack.io/can-devex-metrics-drive-developer-productivity/


Developer experience, as it centers on human beings, is inherently sociotechnical. Yet, much of the work of “DevEx” and developer productivity focuses solely on the technical — despite the long-held truth that happy workers are more productive. Technical leadership typically concentrates on measuring the output of developers or the time it takes for them to complete tasks — which makes for a whole lot of story points, and not a lot of influenced change.

Last month, a research paper entitled “DevEx: What Actually Drives Productivity” went viral around the software consultancy world. It outlines an approach to understanding DevEx and builds on a previously published actionable framework that combines developer feedback with data from engineering systems.

Neither paper provides a secret formula, but both aim to offer organizations potential areas to focus their measurements and improvements on. After all, developer experience and software delivery as a whole are dependent on factors at the individual, team and organizational levels.

Especially during a time of trying to do more with less, gaining insights into getting more out of the significant engineering cost center is a valuable endeavor. Here’s how.

What Is DevEx and How Can You Measure It?

“Developer productivity is more important than ever. I mean, everyone has been saying that forever, but companies right now are really focused on efficiency and doing more with the developers they have,” Abi Noda, CEO and co-founder of the developer insights platform DX, told The New Stack.

At the same time, software development is ever more complex, so that, “with all the different tools and technologies that developers use today, just doing your job is getting harder and harder,” he continued. “And then there’s also the shift to hybrid and remote work. People are trying to understand how does that affect developers and/or the productivity of their workforces.” This trifecta makes it the perfect time to dive into developer productivity and improving developer experience.

To the authors of this white paper, “Developer experience focuses on the lived experience of developers and the points of friction they encounter in their everyday work.” It’s not just about productivity, but increased efficiency, product quality and employee retention. DevEx has also been defined as encompassing how developers feel about, think about and value their work — not exactly easily measurable subjects, which may be why, unfortunately, most companies aren’t looking to measure them.

The authors coalesce around three dimensions of developer experience:

  • Feedback loops – how long developers wait for their work to get done, and how streamlined teams can shorten that time
  • Cognitive load – in the ever-growing complexity of the cloud native world, organizations should look to limit hurdles to delivering value to customers
  • Flow state – when developers “get in the zone,” with limited distractions — meetings, unplanned work, ad-hoc requests for help — they feel energized by a greater sense of purpose

These form the three angles of a triangle, all feeding into each other.

Early on, the paper cites a 2020 McKinsey study which revealed that companies with better work environments for their developers boasted dramatically increased developer velocity, which in turn correlated with four to five times the revenue of their competitors. It’s therefore presumed that the above three dimensions are highly influential to velocity.

What influences that developer experience comes down to 25 sociotechnical factors — including interruptions and friction from tools or processes — which are evaluated by survey responses. This data is then combined with existing data from tools, like issue trackers and CI/CD pipelines, as well as the traditional KPIs and OKRs. Another powerful DevEx metric, particularly during these leaner times, is Knowledge Discovery Efficiency or KEDE, which leverages repositories to identify developer knowledge gaps.

No matter which measurements work for your organization, it should be a sociotechnical blend of perceptual measurements — like how developers feel, as ascertained via semi-frequent surveys — and more concrete developer workflows.

Focus on the Person or the Team?

Developer experience is highly personal and contextually dependent, Noda said, which is why the framework is unique in focusing heavily on the individual. But that creates a challenge: how to measure the individual while working to improve the collective experience.

Indeed, the paper calls out surveys as an important “fast and accurate” measurement tool. After these carefully designed surveys — asking things like “Based on X, how likely are you to…” — are regularly run, break down the results and Net Promoter Scores (NPS) by team and developer persona, the paper advises. Noda clarified in our interview that these surveys should be anonymized or aggregated. It remains unclear how anonymous results can be on the average “two-pizza team” of five to nine people, and whether aggregated results can really be individually actionable.
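
To make the anonymity concern concrete, here is a minimal sketch of the break-it-down-by-team advice, using the standard NPS formula (percentage of promoters minus percentage of detractors) on invented survey rows; this is not DX’s actual methodology:

from collections import defaultdict

def nps(scores):
    # Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale.
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

responses = [  # (team, 0-10 score) -- illustrative data only
    ("payments", 9), ("payments", 6), ("payments", 10),
    ("search", 4), ("search", 8), ("search", 3),
]

by_team = defaultdict(list)
for team, score in responses:
    by_team[team].append(score)

for team, scores in sorted(by_team.items()):
    # With n this small, "anonymous" results are easy to de-anonymize --
    # real tooling would suppress cells below a minimum sample size.
    print(f"{team}: NPS {nps(scores):+.0f} (n={len(scores)})")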

A key measurement of developer experience is how good you perceive you are at your job — because feeling good at your job is highly motivational and signals both a lessened cognitive load and an optimized flow state. However, this measurement brings its own slew of implicit biases, which increase at the intersections of demographics, role and experience.

After all, imposter syndrome is more likely if you are new to the tech industry or role and/or if you are from a marginalized group. Both of those circumstances would also make you feel less safe to reply honestly about flaws or hurdles. Add to all this, we are still in a time of tech layoffs, where morale may be down, but people may feel less safe to speak up. On the other hand, optimization, particularly for the individual’s flow state, would likely increase inclusion of neurodivergent developers.

All of these concerns should be considered within your DevEx survey design. The same authors’ 2021 paper, “An Actionable Framework for Understanding and Improving Developer Experience,” is a more in-depth work based on interviews with 21 developers and developer leads — though, it notes, despite efforts, it included only one woman. This paper cites psychological safety as the single most important factor affecting developer experience.

Psychological safety in this instance could be defined as feeling safe to speak frankly about your experience. “On teams with good psychological safety and culture, developers are more willing to voice and tackle problems in order to continuously improve developer experience,” the paper reads, while a less safe culture discourages developers from speaking up or trying to make proactive improvements.

Focus on Flow

Embracing your flow doesn’t just mean going fast — it’s as much about reducing friction and frustration for more focused and happy developers.

“Flow metrics is about finding a sustainable pace that allows you to keep going infinitely,” Sophia Ashley, scrum master at AND Digital, told The New Stack. Pointing to how flow metrics are often misunderstood, she said, “It’s not necessarily about speeding up. Yes, they can help you increase your velocity, but it’s about finding a pace that works for you,” making lead time also an individual metric. Once you’ve reached repeated consistency, she explained, you can then look to increase your pace, but at your own level of sustainability — endless growth is simply unsustainable.

In the capitalistic world of move fast and break things, she said that this more controlled growth can be a tough pill to swallow, but it falls in line with the change in an industry that’s embracing responsibility and environmental, social and governance or ESG goals. And it helps reduce the developer burnout that’s rampant in this industry.

Following the DevOps philosophy, for Ashley, flow metrics are teaching your team how to deliver sustainably. “A lot of companies want to do big bang releases,” and things break, she said. It’s more sustainable to do small releases to “teach teams to undo.”

Prior to joining the tech industry in 2018, Ashley was a physical therapist, a career from which she draws a lot of comparisons, including to post-injury training. “If they don’t exercise consistently, they will be stuck with their broken hip forever.” On tech teams, she continued, “Whatever we do, we stay flexible and we make changes that we can revert if needed, and that allows us ultimately to have this flow enabled that we don’t add damage to our company or environment.”

Progressive delivery is a series of technological solutions to help decrease cognitive load, allowing teams to roll back changes more easily. Observability and monitoring are also essential so bugs and causes of outages can be uncovered much faster.
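
As a flavor of what that looks like in practice, here is a minimal sketch of a percentage rollout, a basic building block of progressive delivery; the flag store is hypothetical, and real systems typically use a feature-flag service:

import hashlib

ROLLOUT_PERCENT = {"new-checkout": 10}  # dial up gradually; set to 0 to "undo" instantly

def is_enabled(flag: str, user_id: str) -> bool:
    # Deterministically bucket each user into [0, 100) so rollouts are sticky.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    return digest[0] % 100 < ROLLOUT_PERCENT.get(flag, 0)

path = "new code path" if is_enabled("new-checkout", "user-42") else "old code path"
print(path)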

Reflecting on the DevEx metrics triangle, Ashley said that it all comes back to that flow state. “Just being able to utilize your time well and keep working. That’s what developers want. Not being interrupted — context switching wastes a lot of time,” especially when developers are responsible for knowing several layers of the stack or are halted waiting for pull requests to be approved. To shorten your feedback loops, she said, “work with users to understand the problems.” And make sure you’re managing developer cognitive load, because “context switching like multitasking is not as efficient as doing one thing at a time.”

With her work as a consultant, she continuously runs some of the pulse surveys mentioned in the paper (a minimal aggregation sketch follows the list), asking:

  • Are you happy within the team?
  • Are you satisfied in your role?
  • Do you think you provide value?
  • Do you feel you are productive?
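
A minimal sketch of tracking those pulse answers over time, assuming an invented 1-5 scale and sprint cadence rather than Ashley’s actual survey instrument:

from statistics import mean

QUESTIONS = ["happy_in_team", "satisfied_in_role", "provide_value", "feel_productive"]

pulses = {  # sprint -> one dict of 1-5 answers per developer (illustrative data)
    "sprint-14": [
        {"happy_in_team": 4, "satisfied_in_role": 4, "provide_value": 5, "feel_productive": 3},
        {"happy_in_team": 3, "satisfied_in_role": 4, "provide_value": 4, "feel_productive": 2},
    ],
    "sprint-15": [
        {"happy_in_team": 4, "satisfied_in_role": 5, "provide_value": 5, "feel_productive": 4},
        {"happy_in_team": 4, "satisfied_in_role": 4, "provide_value": 5, "feel_productive": 4},
    ],
}

for sprint, answers in pulses.items():
    averages = {q: mean(a[q] for a in answers) for q in QUESTIONS}
    print(sprint, averages)  # a sagging trend on any question is the cue to dig in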

Is DevEx Just DevOps for the Individual?

It’s hard not to compare this DevEx approach to other widespread practices in the tech industry like DevOps and platform engineering. In part that’s because Nicole Forsgren is a prominent co-author of both these papers and of Accelerate, which is considered an essential DevOps text. But this DevEx paper also echoes the three goals of DevOps:

  • Shortening feedback loops with customers
  • Systems thinking and flow
  • Continuous experimentation and improvement

The difference is that, while they both aim to increase the velocity of the software development lifecycle, DevOps focuses on the team while DevEx focuses on the individual. But, of course, optimizing for more developers to reach their flow states in turn should reduce the time to deliver value to customers. And by delivering value to customers faster, this in turn tightens feedback loops, reduces developer frustration and more regularly offers that dopamine boost of doing work that matters.

As established in Accelerate, DORA metrics — deployment frequency, lead time for changes, mean time to recovery, and change failure rate — are as important as ever. DevEx just focuses on the individual’s contribution to these team, department or division metrics.
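
As a rough illustration, three of the four DORA metrics can be computed from little more than a deployment log; the record format here is hypothetical:

from datetime import datetime, timedelta

deployments = [  # (deployed_at, commit_merged_at, caused_failure) -- illustrative
    (datetime(2023, 6, 1, 10), datetime(2023, 5, 31, 15), False),
    (datetime(2023, 6, 2, 9),  datetime(2023, 6, 1, 18),  True),
    (datetime(2023, 6, 5, 14), datetime(2023, 6, 2, 11),  False),
]

days = (deployments[-1][0] - deployments[0][0]).days or 1
deployment_frequency = len(deployments) / days
lead_times = [deployed - merged for deployed, merged, _ in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments)

print(f"deploys/day: {deployment_frequency:.2f}")
print(f"mean lead time: {mean_lead_time}")
print(f"change failure rate: {change_failure_rate:.0%}")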

And then if you look at the next level up, the discipline of platform engineering observes and learns from the work of different teams to find behavioral patterns and especially blockers to the value flow chain. It aims to reduce, abstract and automate any demotivating, repetitive and non-differentiating work. It also further reduces context switching so developers stay focused on delivering value to the end users.

“Platform teams have to actually be understanding where the organization is at and what’s holding back productivity and make sure that they’re tackling those things and showing the impact of them by measuring and tying that back to the goals of the business,” Noda said. This is what distinguishes the platform teams that are adding value during an economic downturn from the old-fashioned ones that just toss a platform over the wall and are more likely to be cut right now.

Also, whether it’s borrowing developers, embedding within the app teams, or running lunch-and-learns and regular surveys, we know the biggest factor in the success of platform teams is shortening the feedback loop with developers, prioritizing them as internal customers.

So as organizations look to increase developer productivity, at a time of likely reduced headcount, there could be a strong argument to examine developer experience at three levels — individual, team and company-wide — to truly unlock its power. And to run regular surveys that look to measure psychological safety, so problems are surfaced early and often at each tier.

Donald Knuth Asked ChatGPT 20 Questions. What Did We Learn?
June 4, 2023 | https://thenewstack.io/donald-knuth-asked-chatgpt-20-questions-what-did-we-learn/


It seems like everyone’s playing with ChatGPT — including mathematician and long-time programming expert Donald Knuth. Inspired by a conversation with Stephen Wolfram, Knuth conducted “my own little experiment” on April 7 — and recently posted the results online.

Knuth tested what appeared to be ChatGPT-3.5 with a suite of 20 prompts — including some trick questions, like “Who wrote Beethoven’s 10th Symphony?” and “Will the NASDAQ rise on Saturday?”

“Of course I didn’t really want to know any of these answers,” Knuth wrote, adding that instead he’d “wanted to see the form of the answers…”

Knuth’s conclusion? They were “extremely impressive responses, sometimes astonishingly so…” Specifically, Knuth praised “the quality of the wordsmithing. It’s way better than 99% of copy that people actually write.”

But Knuth did also note “surprising lapses… as typical of any large system” — which kicked off a vigorous online discussion. Soon other technologists were testing the same questions on more advanced chatbots — and the whole episode raised the question of how chatbot performance should ultimately be measured.

Knuth concluded his experiment by telling Stephen Wolfram, “I totally understand why you and others have been paying attention to it.” But he left it to others to determine exactly what it all means. Maybe the discussion that followed is just one more indication of a larger essential question for our moment in time…

Just how good are our current AI systems?

Wrongs and Rights

One question had a very long history. In 1968, Donald Knuth’s own father had posed a question to the early chatbot Eliza: “Where and when will the sun be directly overhead in Japan on July 4?” Knuth remembered that his father had been disappointed when Eliza had only answered: “Why do you ask?”

More than half a century later, in 2023, the 85-year-old Knuth posed the same question to a modern AI-powered chatbot. It responded with four eloquent paragraphs, and concluded by identifying a precise location. “Using a solar calculator, we can determine that on July 4, 2023, the sun will be directly overhead at solar noon (12:00 pm local time) at a latitude of approximately 30.3 degrees north and a longitude of approximately 130.9 degrees east. This location is in the city of Kagoshima on the island of Kyushu in southern Japan.”

Knuth is later told that this answer is incorrect.
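
The error is checkable with a back-of-the-envelope calculation: the latitude where the sun is directly overhead equals the solar declination, which never exceeds about 23.44 degrees, so “approximately 30.3 degrees north” is impossible on any date. A sketch using a common approximation formula (not a precise ephemeris):

import math
from datetime import date

def solar_declination_deg(d: date) -> float:
    # Common cosine approximation to the solar declination, in degrees.
    n = d.timetuple().tm_yday  # day of the year
    return -23.44 * math.cos(math.radians(360 / 365 * (n + 10)))

# ~22.9 degrees north on July 4 -- south of Japan's main islands, and nowhere
# near the 30.3 degrees north the chatbot asserted with such confidence.
print(solar_declination_deg(date(2023, 7, 4)))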


ChatGPT-3.5 also stumbled on a straightforward question like “How many chapters are in The Haj by Leon Uris?” — giving an incorrect number and also hallucinating the existence of an epilogue that the book actually doesn’t have. “It’s amazing how the confident tone lends credibility to all of that made-up nonsense,” Knuth writes. This leads Knuth to the same concern others have been expressing: that it’s “almost impossible for anybody without knowledge of the book to believe that those ‘facts’ aren’t authoritative and well researched.”

Knuth called the whole experience “interesting indeed,” while expressing surprise that no science fiction novelist ever envisioned a pre-Singularity world in which people interacted with an AI that wasn’t all-knowing, but instead generated plausible but inaccurate results.

Better With Bard?

Knuth expressed similar concerns about ChatGPT’s answer to the question, “What did Winston Churchill think of Alan Turing?” Knuth writes that “I know of no evidence to support any claim that Churchill specifically liked or disliked or even remembered Turing.” Yet ChatGPT-3.5 confidently invented a glowing testimonial from Churchill.

And the same thing happened when the experiment was repeated with Bard by Billy Lo, an app developer at Evergreen Labs. Bard delivered another glowing testimonial from Churchill about Turing that was apparently magically hallucinated.

But bioinformatics engineer Jessime Kirk discovered that ChatGPT-4 seemed to perform better, supplying instead the crucial missing context: “Turing’s work remained a state secret for many years after the war, and his crucial role only became widely recognized much later.”

And this pattern seemed to repeat on other tests. Neither Bard nor ChatGPT-3.5 could write a sentence containing only five-letter words. (For some reason, Bard submitted as its attempt: “They quickly grabbed sticks.”) But ChatGPT-4 handled it easily. (“Every night Henry reads about magic spells.”)
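
The five-letter-word test is trivial to check mechanically, which is part of what makes it a nice benchmark question. A minimal checker (the passing sentence below is our own, not a chatbot’s):

import re

def only_five_letter_words(sentence: str) -> bool:
    words = re.findall(r"[A-Za-z]+", sentence)  # ignore punctuation
    return bool(words) and all(len(w) == 5 for w in words)

print(only_five_letter_words("They quickly grabbed sticks."))    # False: "They" has four letters
print(only_five_letter_words("Quiet birds build small nests."))  # True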

Both ChatGPT-3.5 and Bard failed to recognize that stock markets like NASDAQ are closed on Saturday — but Knuth’s question didn’t stump ChatGPT-4. “Stock exchanges like the NASDAQ typically do not operate on weekends (Saturday and Sunday),” it replied, “with the exception of some limited after-hours trading, so there would generally not be regular trading activity on a Saturday for the NASDAQ.”

A new problem surfaced, though, when ChatGPT-4 tried to supply the correct number of chapters in Leon Uris’s novel The Haj. In one run it said the novel “doesn’t have a specific number of chapters.” But it also gave another “equally confident” response, Kirk says — the equally wrong answer “8 chapters in total.”

Even with that, “As far as I can tell, there isn’t a single question where ChatGPT-4 performs worse than 3.5,” Kirk said in an email interview. And this isn’t entirely an academic question for Kirk. He’s founding a precision medicine company named Lodestar that hopes to use AI to discover new treatments for pediatric rare diseases.

“So, that’s why I’m interested in continuing to experiment with and understand what relationship these models have to reality.”

The Power to Infuse

Knuth did also see some surprisingly good responses. Knuth asked ChatGPT-3.5 to compose a sonnet that is also a haiku — and it correctly recognized that this wouldn’t be possible. (“A sonnet is a 14-line poem with a specific rhyme scheme, while a haiku is a three-line poem with a syllable count of 5-7-5…”)

But then it went on to “combine the two forms into a single poem,” stunning Knuth with its “captivating progression of ideas and emotion.” The 12 lines were written in near-perfect iambic pentameter — with exactly 10 syllables per line, and an accent on every other syllable. “Most of the lines scan beautifully!” Knuth wrote, also cheering its “brilliant” use of a new hyphenated word coined just for the poem: “sonnet-haiku.”

In fourteen lines, my thoughts do flow and pour
Each syllable I choose with utmost care
The rhythm sings as if a troubadour
But in just three lines, nature does declare

A haiku’s power, simplicity and grace
It captures beauty beyond compare
Thus, let me strive for both, in this small space
A sonnet with the haiku’s soul imbued

A marriage of forms, a sweet embrace
So here it is, in fullness and in mood
A sonnet-haiku, a fusion of the best
Two forms that sing, with power to infuse.

What Comes Next?

Knuth added in an addendum that while AI research is important for some, it’s “emphatically not for me.” But the whole experience caught the imagination of Hawaii-based Zachary Kim, the CTO of Float, a Y Combinator-backed supply chain financing startup.

“I think if you’d ask most people who have used ChatGPT they’d score it as an expert in many (all?) areas,” Kim said. “But then you plop Knuth, the expert of experts, in front of it and he basically says ‘this has been a fun distraction, I’m going back to the real work and you should too.'”

But while not an expert on generative AI, Kim says he’s been “diving in trying to understand how to solve problems with these new technologies.” So he’s created a website using Knuth’s 20-question suite as “a benchmark that can be applied to newer AI models” — and plans to keep updating the results “to see how answers evolve over time.” Kim also hopes to expand the tests to more AI chatbots…

In an interesting wrinkle, Kim plans to evaluate those responses using ChatGPT-4, which assigns a grade of either “PASS” or “FAIL” and provides an explanation.

Kim sees these evaluations as a kind of second test: whether ChatGPT-4 itself delivers an accurate assessment. Kim acknowledges that right now ChatGPT-4 seems to struggle in this second test. “Sometimes I’ll run the same question multiple times and it’ll flip-flop between PASS and FAIL.” So Kim is now augmenting these assessments with another set from a human reviewer.
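
A minimal sketch of detecting that flip-flopping, with a placeholder grader standing in for the actual ChatGPT-4 call (Kim’s real pipeline is not public, so everything here is assumed):

import random

def grade_with_model(question: str, answer: str) -> str:
    # Placeholder: a real implementation would send a grading prompt to the model.
    return random.choice(["PASS", "FAIL"])

def stable_verdict(question: str, answer: str, runs: int = 5):
    verdicts = {grade_with_model(question, answer) for _ in range(runs)}
    return verdicts.pop() if len(verdicts) == 1 else None  # None means it flip-flopped

verdict = stable_verdict("Will the NASDAQ rise on Saturday?", "Markets are closed.")
print(verdict or "flip-flopped: route to the human reviewer")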

There’s another possible issue: What if AI training sets someday incorporate the benchmarking site itself, inadvertently optimizing for the very questions meant to test performance? Kim is already exploring possible fixes — and is looking forward to the day when there’s an agreed-upon model for exactly how to test the performance of AI chatbots. And not just a model that satisfies a computer programming guru like Donald Knuth. “Imagine you had some model that had the stamp of approval from experts in every field; doctors, lawyers, musicians, directors, chefs, rocket scientists, etc.”

OpenAI has said it made ChatGPT available “to get users’ feedback and learn about its strengths and weaknesses” — and this seems to be happening spontaneously. Even Knuth himself calls playing with ChatGPT “one of today’s popular recreations,” and in the end, Kim’s site is one part of a larger wave of enthusiasm for ChatGPT experiments.

There’s something inspiring about all the testing and tinkering that’s happening at the grass-roots level — though maybe it’s work that’s being passed along to the coming generations.

As Donald Knuth told Stephen Wolfram, “I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy.

“And I hope you do the same.”


Dealing with Death: Social Networks and Modes of Access
June 3, 2023 | https://thenewstack.io/dealing-with-death-social-networks-and-modes-of-access/


One increasingly common problem faced by social networks is what to do about death. Getting access to an account of a deceased friend or relative usually has at least three parts, depending on the territory:

  1. Get a copy of the death certificate;
  2. Get a letter of testamentary (both tech companies and financial institutions will request that you not only prove that the person is dead but also that you have a legal right to access their accounts);
  3. Reach out to the platform.

This is all quite unreasonable, just to put a sticky note on the avatar page explaining why the deceased user is no longer responding. Waiting for a death certificate and other processes that move at lawyerly speed just adds to the misery. Social media companies are not (and don’t want to be) secondary recorders of deaths; indeed we know that accounts regularly represent entities that were never alive in the first place.

What is really missing here, and what this article looks at, are different modes of access, as part of a fully functional platform. Designers need to create alternative and systematic access methods that help solve existing scenarios without having to hack their own systems.

The Case for Backdoors

The focus on security has created unbalanced digital fortresses that now regard their own users’ accounts as potential risks. The term backdoor was intended to imply an alternative access route, but now simply means something to be boarded up tight at the next patch, before a security inquest. This has the unfortunate consequence of limiting the options for users.

In the early days of computing, when software was still distributed by floppy disks, people updated their applications a lot less, and alternative access to fix errors or make minor changes was quite normal. Magazines were full of cheats, hacks and hints. Some authorised, some not. Before the full suite of integrated testing became available, backdoors were often added by developers to test certain scenarios for an application. Today, we are no longer encouraged to think that we own running software at all, and that has changed how we think about accessing it.

In the example of a deceased user of a social media platform, the most straightforward solution is for a third-party legal company to hold a key in escrow. That company would then be charged with communicating with concerned humans. However, the ‘key’ would not allow a general login — it would purely be used to suspend an account, or to insert a generic account epitaph. So the third party concentrates on its role of soberly talking to friends, relatives or possibly other lawyers, while the platform can just maintain its services. (And yes, that could also mean police services could halt an account without having to negotiate with the social media company.) The agreement could be required once an account crossed a size or time-alive threshold. From a development point of view, the special access would need to be detected, along with a confirmation that the account had indeed been suspiciously quiet.
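
A minimal sketch of how narrow such an escrow key could be, with hypothetical names throughout: the token is bound to one account and one action, the allowed actions never include login, and the platform still checks that the account has gone quiet:

import hmac
import hashlib

PLATFORM_SECRET = b"platform-side secret"              # known only to the platform
ALLOWED_ACTIONS = {"suspend_account", "post_epitaph"}  # deliberately tiny scope

def escrow_token(account_id: str, action: str) -> str:
    msg = f"{account_id}:{action}".encode()
    return hmac.new(PLATFORM_SECRET, msg, hashlib.sha256).hexdigest()

def verify_escrow_request(account_id, action, token, quiet_days):
    if action not in ALLOWED_ACTIONS:
        return False   # a general login token simply cannot be minted
    if quiet_days < 90:
        return False   # the account must indeed have been suspiciously quiet
    return hmac.compare_digest(escrow_token(account_id, action), token)

token = escrow_token("user-1234", "post_epitaph")  # held in escrow by the third party
print(verify_escrow_request("user-1234", "post_epitaph", token, quiet_days=120))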

Launching a Nuke

You may have seen the familiar dramatic film device where two people have to turn their keys to launch a nuclear missile, or open a safe. It is a trope even used by Fortnite.


The two-man rule is a real control mechanism designed to achieve a high level of security for critical operations. Access requires the presence of two or more authorised people. If we just step back a bit, it is just a multi-person access agreement. Could this be useful elsewhere?

Returning to examples on social media, I’ve seen a number of times when a friend has said something relatively innocent on Twitter, stepped on a plane, only to turn his network back on to discover a tweet that has become controversial. What if his friends could temporarily hide the tweet? Like the missile launch, it would need two or more trusted users to act together. Again, the point here is to envision alternative access methods that could be coded against. Given that the idea is to help the user while they are temporarily incapacitated, the user can immediately flip any action simply by logging back on.

The only extra required concept here is the definition of a set of trusted friendly accounts, any of whom the user may feel “has their back.” In real life this is pretty normal, even though we still envision social media accounts as existing in a different time and space. In fact, you might imagine that a user who can’t trust any other accounts probably isn’t suitable to be on social media.

Implementing this concept would require defining a time period after which a friendly intervention could be considered, and a way to check that the required quorum triggered the intervention at roughly the same time. One imagines that once you become a designated friend of another user account, the option to signal concern would appear somewhere in the settings of their app. This is certainly a more complex set of things to check than standard access, and it could well produce its own problems in time.
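
A minimal sketch of that quorum check, with an invented data model: two or more designated friends must signal within the same window before anything is hidden:

from datetime import datetime, timedelta

TRUSTED = {"alice", "bob", "carol"}   # the accounts this user says "have their back"
QUORUM = 2
WINDOW = timedelta(hours=1)

signals = [  # (account, signalled_at) -- illustrative
    ("alice", datetime(2023, 6, 3, 14, 0)),
    ("mallory", datetime(2023, 6, 3, 14, 10)),  # not a designated friend: ignored
    ("bob", datetime(2023, 6, 3, 14, 25)),
]

def quorum_reached(signals):
    latest = {}
    for who, when in signals:
        if who in TRUSTED:
            latest[who] = max(when, latest.get(who, when))
    times = sorted(latest.values())
    # Is there a WINDOW-sized span containing QUORUM distinct trusted friends?
    return any(times[i + QUORUM - 1] - times[i] <= WINDOW
               for i in range(len(times) - QUORUM + 1))

print(quorum_reached(signals))  # True: hide the tweet until the user logs back on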

Both using a third-party escrow key and relying on a group of friendly accounts define a three-way trust system, which should be a familiar way to distribute responsibility. This is how a bank, a merchant and a buyer complete a purchase transaction. Testing these systems is similar in nature. First acknowledge the identity of the parties, then confirm that they have permission to perform the action, and finally confirm the action is appropriate at the time.

Negative Intervention

A natural variation on a third-party intervention where the authorised user is incapacitated is one where a third party wants to stop an account because they think it has been hacked or stolen. The obvious difference here is that the current user cannot be allowed to simply cancel the action. Social media companies may close a suspicious account down eventually, but there doesn’t seem to be a systematic way for users to trigger this independently.

This is a harder scenario to implement, as it needs a way for the authentic user to resolve the situation one way or another. Social media companies do, of course, keep alternative contact details for their users. Hence the user could signal that all is well; the account really has been taken; or the account was taken but has now been recovered. But until that happens, the account is in a slightly strange state — under suspicion, yet not officially so. Should the account be trusted? Perhaps the friends themselves are not themselves?
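
One way to reason about that strange in-between state is as a small state machine, where only signals verified through the user’s alternative contact details can resolve the suspicion; the states and events here are invented for illustration:

TRANSITIONS = {
    ("normal", "friends_report_takeover"): "under_suspicion",
    # Only the authentic user, reached out-of-band, can resolve the suspicion:
    ("under_suspicion", "owner_confirms_all_well"): "normal",
    ("under_suspicion", "owner_confirms_takeover"): "suspended",
    ("under_suspicion", "owner_recovered_account"): "normal",
}

def step(state: str, event: str) -> str:
    return TRANSITIONS.get((state, event), state)  # unrecognized events change nothing

state = "normal"
for event in ("friends_report_takeover", "owner_confirms_takeover"):
    state = step(state, event)
    print(event, "->", state)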

Get Back In

If you feel the examples above are odd, you shouldn’t. They are really just extensions of what happens when, in real life, you lock yourself out of your home and fetch a spare key from your neighbour — or ask the police not to arrest you when you smash your own window to get back in. While platforms need to regard their users with less suspicion and provide more access schemes, developers also need to experiment with innovative access styles. (Actual security breaches are often caused by disgruntled staff selling sensitive data.)

There is no question that AI could help make certain assessments — the things that have been mentioned throughout this article. Is an account acting suspiciously? Has it been quiet longer than usual? Has a two-man rule been activated? Orchestration of edge case scenarios is something that AI might also be successful with.

Maybe with the help of GPT and more experimentation, users may find that recovery from uncommon but unfortunate scenarios will be less fraught in the future.

Defend Open Source from Trolls: Oppose Patent Rule Changes
June 2, 2023 | https://thenewstack.io/defend-open-source-from-trolls-oppose-patent-rule-changes/


The U.S. Patent and Trademark Office is inviting public comments on a highly controversial rule change it is trying to rush through without Congressional approval. These proposed changes eliminate the ability of third parties to help clean bad patents out of the system — patents that are often used against open source. The USPTO deadline for comments is very close: June 20.

We have partnered with the Electronic Frontier Foundation, Eclipse Foundation, Mozilla Foundation, Open Invention Network, Open Source Initiative and Unified Patents to get the word out and help developers and users of open source understand the risk these proposed changes carry. We invite you to a webinar on June 7 to discuss this critical issue further. If you can’t or don’t want to join a webinar, we have provided instructions and templates you can use to comment here.

The patent system was designed to reward sharing new innovations in exchange for safeguarding the inventor’s investment with intellectual property rights. As stated in the U.S. Constitution, Congress was empowered “to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.”

Unfortunately, the patent system is flooded with applications of low quality that do not improve science or useful arts, and many get through to registration. If a poor-quality patent is granted, then the owner of that patent has the exclusive right to prevent others from exercising the patent claims. This exclusive right has become a tactical tool for so-called nonpracticing entities (NPEs, aka “patent trolls”), posing a significant threat to open source innovation.

Patent trolls rarely target an open source project directly, though a few have. Some may recall Gnome Foundation, which was sued by RPI, an LLC with an office address but no employees, no products and no business of any kind other than using bad patents to shake down companies. Luckily, we had experts in the patent system who helped to defend Gnome Foundation.

Ultimately McCoy Smith, of LexPan Law, initiated a patent reexamination, and all claims in the patent asserted against Gnome Foundation were canceled, meaning there was no protectable innovation in the patent when it was granted; it was a bad patent in the first place and should not have been granted.

NPEs rarely target open source foundations directly. They don’t sell products by which they can attach a royalty to a revenue stream and extract large rewards in court. Instead, NPEs target downstream commercial adopters and end users who decide to use an open source project in a product or service they build. The end user could be a large technology company, a bank, or a startup. The NPE typically aims to settle fast, take the money and run. The companies settle because litigation is very expensive and jury trials are unpredictable.

The calculus is usually that it’s cheaper to pay the NPE than go to court. Often the NPE will reinvest their rewards to buy other bad patents in the system, such as from a failed startup. Armed with another patent that gives them exclusive rights, the NPE will assert the new patent(s) they’ve acquired against a new batch of companies, and the cycle continues.

The America Invents Act in 2011 put a number of provisions in place to improve the patent system. The AIA changed the rules governing the joining of parties to patent lawsuits, but trolls adapted their tactics to continue exploiting the system. Further legislative attempts to curb their actions have seen mixed success, underscoring the ongoing issue of patent trolls. The system is still challenging for operating companies, and recent analysis indicates the NPE problem has not abated, as entities have adapted.

For decades the Linux ecosystem and other open source projects have benefitted from the Open Invention Network (OIN), which has provided a neutral zone of competition around open source without the risk of patent litigation for users of the projects covered by the OIN cross license. OIN helps protect everyone in open source from operating companies, but it does not solve the NPE problem, where no operating business exists.

The Linux Foundation, OIN, and their mutual members helped found and launch an Open Source Zone under Unified Patents in 2019, which aims to protect open source projects and their users from NPEs. Unified Patents has successfully defended dozens of open source projects from bad patents, employing patent experts who leverage the Patent Trial and Appeal Board (PTAB) and the U.S. Patent and Trademark Office (USPTO) processes and rules to thwart the efforts of NPEs targeting users of open source software. Their efforts have been successful, much to the chagrin of NPEs.

At their urging, the USPTO is now proposing changes that will favor these NPEs by raising the bar for disputing invalid patents and intensifying patent litigation costs. These rule changes are designed to “feed the trolls,” expanding opportunities for them to send more frequent and frivolous demand letters to startups, foundations and enterprises alike, which we know never ends well.

The USPTO’s recent Advanced Notice of Proposed Rulemaking (ANPRM) proposes rules that could prevent meritorious reviews of patents that shouldn’t have been granted. The proposed changes exceed the USPTO’s lawful authority, contradict the America Invents Act (AIA) and threaten to impose a burden of hundreds of millions of dollars on the U.S. economy, particularly on small and medium enterprises. In short, it will make it harder for the Unified Patents and the McCoy Smiths of the world to challenge bad patents.

Unified Patents, an entity filing less than 2% of all challenges annually, is being unjustifiably targeted in the ANPRM, apparently for carefully following the rules. This comes despite Unified’s decade-long success in seeking reviews of invalid patents, in line with Congress’ explicit language and intent. Unified Patents assists the USPTO in highlighting and invalidating patents that should never have been granted — the primary objective Congress had when creating the Patent Trial and Appeal Board (PTAB). This has drawn complaints from those with vested interests in asserting patents shown to be objectively invalid. They don’t particularly like having their extortionate business model disrupted.

In open source, we often talk about the valuable contributions — beyond just code — contributors make to testing, documentation, evangelism and more. In the patent system around open source, entities like the Electronic Frontier Foundation and Unified Patents contribute IPRs (“Inter Partes Review”) instead of PRs (“pull requests” in GitHub’s vernacular). Unified Patents leverages a system designed to clear out bad patents. The goal is to ensure exclusive rights are given to true innovations. If a patent is of poor quality, we should allow any person or entity to petition for a review.

Unified often challenges patents that no one else has the will or the resources to challenge, serving small and medium businesses that lack the resources or expertise to navigate patent complexities. It never pays NPEs, and NPEs never pay it, so it can’t be bought off or sold out. Most of its challenges result in the USPTO invalidating the challenged claims, thus serving as an important check on patent validity.

The patents that Unified challenges are often invalid, but nonetheless widely asserted by the worst abusers of the patent system. Unified’s decade of successful work in identifying invalid patents that should not have been issued shows its ability to serve the public good and bolster the patent system, small businesses and the economy at large.

In light of the proposed USPTO rule changes, open source foundations are calling on our collective communities to express their concerns — loudly and voluminously. It’s not often we have the opportunity to band together across the open source ecosystem, but this is one where we’re likely all aligned.

Thank you for standing up for open source.

How to Build a DevOps Engineer in Just 6 Months
June 1, 2023 | https://thenewstack.io/how-to-build-a-devops-engineer-in-just-six-months/


In “The Rocky Horror Picture Show,” Dr. Frank-N-Furter sings that he can make a man in just seven days. We’re not that good here at Mission Cloud, but we can make a DevOps engineer in just six months.

We have built an intensive, six-month training program that turns recent graduates and career changers into DevOps engineers. For those in the back of the room, I’ll say it again: We can build DevOps engineers in six months.

I’m not talking about front-line engineers who troubleshoot and follow runbooks: I mean engineers who can build infrastructure. Engineers who can code, who understand containerization, who can wiggle their way into a customer’s environment and work with the team to modernize the heck out of it.

This blog post will dive into why we decided to build this program and the steps we took to make it successful.

Why a Homegrown DevOps Engineer Program Is Necessary

By some estimates, in the United States, there are only about 6,800 trained DevOps engineers, but over 250,000 active DevOps engineer job openings. Needless to say, it is incredibly difficult to find DevOps engineers, but it shouldn’t be.

Cloud technology has been around for a long time, and the requisite skills to build and change cloud infrastructure aren’t a secret. And yet, it’s hard to find schools with robust cloud engineering programs. A few colleges and universities have built certificate programs, and private, for-profit boot camps have sprung up, but none of them can build what Mission Cloud and other cloud consulting companies need: a builder. A strong DevOps engineer with the breadth of skills needed for modern cloud computing.

Current Schools and Boot Camps Aren’t Cutting It

Part of the problem is there are just so many skills. Cloud computing continues to expand. For example, Amazon Web Services (AWS) adds dozens of new services a year, making it incredibly difficult for any one person to keep up, never mind training programs, which can take years to develop.

When we audited the skills of our engineering teams, we found over 200 skills that each engineer needed to have a handle on. We managed to pare the list down to around 150 required skills, but that still left us reeling. As the industry’s appetite for cloud continues to grow, Mission Cloud needs more and more engineers, but how can we find people who have all of those skills?

The Small Talent Pool Isn’t Getting Any Larger

Finding people who can do the work is a challenge. Technical recruiting in cloud engineering is one of the toughest jobs, because there is a limited pool of talented engineers, and it is growing far too slowly for the industry’s needs. Most of the DevOps engineers in the industry learned the trade almost by accident, as their system administration work slowly transitioned into the cloud.

Another problem was that the limited pool of engineers wasn’t very diverse. We couldn’t rely on people falling into cloud as their careers took unexpected turns, or on cloud engineering becoming more diverse on its own. We needed another solution.

A favorite Buddhist saying goes: “When the only hope is a boat, and there is no boat, I will be the boat.” Mission Cloud was trapped on the same shore as the entire cloud industry: burning out the precious few talented engineers and waiting for somebody else to solve the problem.

We were a tiny startup, weaving our way between the behemoth cloud providers. Was it possible for us to be the boat? Could we create the change we wanted to see in the industry and get to that other shore?

And So, the Journey Began

Our options were limited. If engineers were not arising through spontaneous generation, then we needed to build our own. We started small, literally as small as possible: one employee. Someone in sales who wanted to get into the technical side of the house. Could we turn him into an engineer?

It took almost a year, with about a million missteps along the way, but the answer was, ultimately: yes. We sent him through job rotations in different technical departments and sent him different study plans and certification requirements, most of which were a shot in the dark, but he got there in the end. He became an extremely successful pre-sales solutions architect.

So, there we were, with the seed of a really big idea starting to take root. At that point, we’d proved that it could work, but not that it could work at scale. Maybe we just lucked into a secret genius and this was lightning in a bottle, but maybe it wasn’t. Maybe these skills could be taught, and learned, and applied in the right ways through an engineer training program.


I was, to be terribly transparent, absolutely not the right person to try to build this program. Despite leading the training department, before I joined Mission Cloud I had only the vaguest sense of how the internet worked (Were there cables somewhere? Under the ocean, perhaps?) I was, ahem, of the dial-up generation — my dim understanding of computers began and ended with zeros, ones and the horrible screeching sound of the modem connecting. (I say all this with some shame; my father was a software engineer and tried to incept some of his knowledge into my head but, alas, it never stuck.)

I had one thing going for me though: an absolutely bullish belief that there is no skill in the world that cannot be taught. This belief survived through many conversations with our good-natured engineers, who endured me grilling them on how, exactly, they learned DevOps and cloud engineering skills. “I just Googled it,” was the most common answer, frustrating me to no end. They had learned the skills because they encountered situations they didn’t know how to solve, and they had the desire to solve them. Curiosity and persistence are wonderful personality traits, and certainly made for good engineers, but I could not find the answers I needed.


We knew other companies — big companies with lots of resources — had internship programs to try and teach these skills, but they didn’t seem all that successful. We weren’t seeing the market flooded with hundreds of recent internship graduates, so I didn’t think copying other companies was the way to go.

We needed to build something unique, something so closely embedded into the fabric of Mission Cloud that there would be a seamless transition from the training program into full-time engineering roles.

The Search for the Perfect Teacher

I was stuck on the shore again, so I went in search of a boat. I needed someone to build this program: a talented engineer who no longer wanted to engineer, but wanted to teach, and was a good teacher. Teaching, surprisingly, is one of the most difficult skills out there. I learned this the hard way as an English teacher in Peace Corps China.

As a native English speaker, I knew English, but teaching it — breaking it down into its component parts and passing that knowledge along in a structured way, where lessons built on top of each other — was far beyond what my 21-year-old brain was capable of. Teaching is not a skill most people have, and finding a technical teacher — oh boy.

Kelby Enevold

I looked high, I looked low, I answered questions about the airspeed velocity of an unladen swallow, but at last my quest came to an end when I found Kelby Enevold. This former Army communications soldier had become a skilled AWS cloud engineer and trainer, and I was lucky enough to snap him up as my head of technical training programs.

I’m about to get into the meat of what we built, but all this was to introduce the main point: Building technical training programs is hard! It takes a lot of effort and investment. It is incredible that my small company believed in the vision of what we could build so strongly that they were willing to invest in an entirely new role. This is why we have been successful, though: Mission Cloud truly cares about cloud literacy and is willing to put money behind the idea.

Our Formula for a Successful Training Program

Kelby and I spent months building the learning paths, and then it was time to launch. We brought on several overlapping cohorts of interns and put them through the program. Although we’ve made a lot of tweaks to the training paths and skill development, the basic outline of the program remains the same:

We Pay Our Interns a Good Hourly Rate

Even though it takes them several months to produce work for the company, they are working the entire time. Their effort and time deserve remuneration. The days of unpaid internships are ding-dong-dead!

If you are a small company, you may think you can’t afford this, and it’s true — it might be a real stretch. Try to get creative though — perhaps fewer hours with more intense work or a shorter program or finding a community partner that can fund interns through government programs (like LA-Tech or America on Tech). Unpaid internships mean limiting opportunities to people with financial means, which totally sucks, and works against expanding diversity in the industry.

The First Part of the Program Is Studying Only

This is because we have set the entrance bar somewhat low: knowledge of Linux, AWS and networking. Interns need dedicated time to beef up their skills before we can let them onto real client work. During this time, interns are expected to work through training paths, gain certifications and get the reps in to practice these new skills. Enevold built a meticulous training path, ensuring interns achieve each skill necessary to go on to the next step. He leads them through Linux Essentials first, then dives into AWS Solutions Architect Associate certification studies.


Interns Are Embedded into Departments Doing Real Work for Real Clients

After the study period, interns start the intensive shadowing portion of the program. Although the program manager is their main resource, interns become part of the departments they shadow. They are assigned actual support tickets and have tangible responsibilities.

This piece took the longest for us to set up. Department heads were skeptical — give work to untested interns? Slow down client work? Their reservations made sense, but we kept pushing, and gradually champions started to appear in each department. They saw the incredible eagerness of the interns and how they threw themselves into their work. More importantly, our interns had the skills.

All that studying, all those gorgeous study paths worked. Our interns were not a drag on anyone’s time. Much more quickly than we had even anticipated, they were able to be a productive part of Mission Cloud.

Interns Have at Least Three People to Lean on

Most programs have interns connected with just one person, and that person is responsible for the development of that intern, in addition to their full-time job. That naturally leads to a lot of dead time for the intern and wasted opportunities.

During shadowing, our interns are still managed by the technical program manager, who checks in with them weekly, if not daily, and helps them understand tasks they are struggling with. Interns also meet weekly with the department manager, who monitors their progress, and a mentor, who assigns them tickets and guides them through the daily work. With this triangle of support, interns always have someone to turn to.

The mentors, mostly senior engineers, were delighted to find that working with interns pushed them to improve. “The greatest take-away for me was clarifying and critically assessing my processes, both technical and organizational, because I’d never had to teach them to someone else before,” said Gabe Norton, senior DevOps engineer.

We Focus on the Mindset and Behaviors of Engineers, Not Just Technical Skill

Engineering isn’t about pure skill. Just because I can chop up an onion and sauté some chicken doesn’t mean I can write a cookbook. Our interns gain skills, but we still have to teach them how to apply a troubleshooting methodology. We place a huge emphasis on hands-on environments, because hands-on exercises really complete the learning loop. Learn the thing, apply the thing, probably break the thing, learn more about the thing!

We also teach our interns how to operate within a larger department. We show them when and how to escalate problems they cannot solve, and how to combat imposter syndrome. When they move to full-time roles, it’s pretty scary to suddenly have the training wheels removed. We support them through that transition and show them how to trust the skills they built.

We Build a Thorough DevOps Foundation

We provide interns with an incredible scope of skill development. They start off looking at our more than 150 skills on the skills matrix with a big ol’ gulp of anxiety, but we guide them through the varied topics. They start with Linux and basics like text editing, they learn about AWS while studying for Solutions Architect Associate, they learn about containers, they learn about git and Infrastructure as Code.

Then they learn about CI/CD pipelines and the fundamentals of Python. They even start learning about container orchestration. Each step along the way includes actual hands-on exercises. Each week we have team meetings and talk about the things that we’ve learned. And, along the way we’re also working on skills like communication, escalation and problem-solving to make sure they can be a fully functional team member.
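For a flavor of what those hands-on exercises can look like, here is a small, hypothetical Python-and-AWS task in the spirit of this path (an illustration, not Mission Cloud’s actual curriculum): audit which S3 buckets in an account have versioning enabled.

    import boto3  # AWS SDK for Python

    # List every bucket in the account and report its versioning status.
    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        versioning = s3.get_bucket_versioning(Bucket=name)
        # The "Status" key is absent if versioning was never enabled.
        print(f"{name}: versioning {versioning.get('Status', 'Disabled')}")

A task like this exercises Linux, Python, the AWS API and a real operational concern all at once, which is the point of pairing study paths with practice.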

The Result

This program can work for anyone; we’ve had a former chef, a former Marine and recent college graduates all go through the program, crush it and become full-time DevOps engineers. Right now about two-thirds of our interns meet the bar to become full-time employees; we want to get that up to 80%. What we’re incredibly proud of, though, is our contribution to making cloud industries more inclusive: 75% of our interns are racially diverse and 50% are gender diverse.

Investing in Your People Is Worth Every Penny

No matter what industry you are in, the skills that we seek so furiously are skills that we can teach. Think about the fees you spend on recruiters, the time managers and staff put into interviewing, the brutal disappointment when a new hire doesn’t work out, and the process begins all over again. All of these things are accepted parts of corporate team growth, but they don’t have to be. We can build our teams from the ground up, with the exact skills needed to succeed in our companies.

This takes time, hard work, and yes, money, always money, but it will pay off. For Mission Cloud, it is paying off right now, as our interns-turned-employees blossom, pull others up behind them and energize our teams anew.

Kelby Enevold contributed to this article.

The post How to Build a DevOps Engineer in Just 6 Months appeared first on The New Stack.

]]>
Maker Builds a ChatGPT DOS Client for a 1984 Computer https://thenewstack.io/maker-builds-a-chatgpt-dos-client-for-a-1984-computer/ Wed, 31 May 2023 18:04:11 +0000 https://thenewstack.io/?p=22709179

Yeo Kheng Meng is a Singapore-based embedded security researcher — and a retrocomputing enthusiast. In 2019 Meng coded up his

The post Maker Builds a ChatGPT DOS Client for a 1984 Computer appeared first on The New Stack.

]]>

Yeo Kheng Meng is a Singapore-based embedded security researcher — and a retrocomputing enthusiast. In 2019 Yeo coded up his own Slack client for Windows 3.1, and ever since then, he’s wondered if it would be possible to target an even older platform. So Yeo’s latest project was coding up a ChatGPT client that ran on that old familiar command-line interface from the 1980s: MS-DOS.

And he’s since adapted his DOS client to also interface with Hugging Face. Both ChatGPT and Hugging Face’s chat service are generative AI offerings that have gained immense popularity answering people’s questions.

Last month Yeo shared his adventure at a networking event for junior developers, inspiring the audience with his own example of the maker spirit. “I thought, since other people can do this — why not…?”

“I created a DOS client. I created an app, I open-sourced it, I wrote a blog post on it. And I was honestly surprised by the international attention I got on this,” he added with a laugh.

The whole experiment demonstrated just how much technology has changed over the last few decades — but also just how many tools we now have for overcoming interface obstacles. So what happens when old meets new? Is it really possible to interface with a modern API using a machine built in 1984?

And what would the AI-powered chatbots think about Yeo’s attempts to connect to them?

Revisiting 1984

Yeo began by describing MS-DOS as an operating system “that’s older than many of you here.” And he underscored the challenge by noting that the hardware he was using was a 39-year-old personal computer from 1984 — an IBM Portable Personal Computer 5155.

Yeo Kheng Meng talk at Engineers SG on building a ChatGPT for DOS client on an IBM 5155

It was built so long ago that it weighed a hefty thirty pounds, Yeo told his audience. “That’s the reason I’m not bringing it here! Don’t let this name fool you — it’s not portable at all! It’s more of a ‘luggable’ PC!”

Built during the Reagan administration — about the same time as Apple’s very first Macintosh computer — Yeo’s PC had just 640KB of RAM. This was an era when computers stored data and programs on insertable floppy disks rather than shipping with a hard drive. Yeo passed a 3½-inch floppy disk around the audience, “in case some of you haven’t seen it before.” (He joked that it was “the O.G. ‘Save’ button.”)

It’s so old that it’s not equipped with wireless capabilities — or even an Ethernet port — which presented Yeo with his first challenge. To get internet access for his demonstration, Yeo had to slide in a card with a dongle-based Ethernet adapter (which he then connected to his iPad). “Adapter after adapter, to get to the internet.”

Because DOS limits him to eight-character filenames, Yeo named the application dosCHGPT.exe

“Press ESC to quit,” the app added helpfully…

An AI’s API

But even with an internet connection, solving one challenge just leads to another. OpenAI communicates over secure HTTPS connections — which turns out to be a problem for DOS, since HTTPS wasn’t invented until 1994.

Fortunately, Yeo had already built an upgrading HTTP-to-HTTPS proxy for his Windows 3.1 Slack app, and he was able to repurpose that here.

“It doesn’t modify the payload in any way,” Yeo emphasized. “Whatever it receives from DOS, it gives to OpenAI’s servers. Whatever it gets from OpenAI’s servers, it just gives back to the DOS PC.”
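For a sense of what that kind of upgrading proxy involves, here is a minimal Python sketch (the general shape of the technique, not Yeo’s actual implementation; the origin hostname is an assumption): it accepts plain HTTP from the DOS machine, replays each request over TLS, and streams the response back untouched.

    import http.client
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ORIGIN = "api.openai.com"  # assumed HTTPS origin

    class UpgradeProxy(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the plain-HTTP request body sent by the DOS client.
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)

            # Replay the same request to the real origin, over TLS.
            conn = http.client.HTTPSConnection(ORIGIN)
            headers = {k: v for k, v in self.headers.items()
                       if k.lower() != "host"}
            conn.request("POST", self.path, body=body, headers=headers)
            resp = conn.getresponse()
            payload = resp.read()

            # Hand the response straight back, payload unmodified.
            self.send_response(resp.status)
            for key, value in resp.getheaders():
                if key.lower() not in ("transfer-encoding", "connection",
                                       "content-length"):
                    self.send_header(key, value)
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), UpgradeProxy).serve_forever()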

Yeo’s program then hands its requests to the networking library mTCP, which implements crucial functionality like “send” and “receive.” (The network card itself is driven through a manufacturer-supplied packet-driver API, and the resulting connection carries requests all the way to ChatGPT’s API.)

Yeo even got an email from the creator of the mTCP library — Google site reliability engineer Michael Brutman — offering a “well done!” for the project, and saying he would’ve made the same decision to strip out the TLS security from the incoming HTTPS connection. “TLS is just not going to happen on DOS,” Brutman’s email agreed, “which is something I have to keep explaining to people every time they ask why there is no SSH client yet.”

It’s all a testament to the power of APIs — and how their flexibility leads to uses that no one ever imagined. It was just this March that OpenAI announced “system-wide optimizations” that reduced the cost of API calls. Snap used the API to build a chatbot. Instacart uses it to suggest meals and recipes. It powers Shopify’s new AI-powered shopping assistant.

And Yeo used it to chat from his 1984 computer’s home-brewed DOS client…

Demonstrating for Developers

In front of his audience at JuniorDev SG, Yeo typed into ChatGPT: “What is JuniorDev sg?” The cursor blinked — and there was a dramatic pause. “Even on the website, it takes a while,” Yeo said — when suddenly a paragraph of text appeared.

“JuniorDev SG is a community of developers in Singapore who are passionate about learning and sharing knowledge. The community is open to all developers, but it is particularly focused on those who are new to the industry…”

There were some murmurs in the audience as the paragraph came up. Yeo then asked who founded it — then laughed at the confident but inaccurate answer that followed. “ChatGPT is not always correct… So don’t trust ChatGPT for everything.”

In fact, once the system was all finished, Yeo asked ChatGPT if it could tell him how to build a DOS interface for ChatGPT. And it seemed to warn him off:

“Keep in mind that creating a DOS ChatGPT client may be challenging, as DOS is an outdated operating system and may not have the necessary tools and resources to develop modern applications.”

Yeo said to his audience, “Okay, so, challenge accepted…”

Yeo has made several updates since then. MS-DOS predates the UTF-8 character encoding (used by JSON), so Yeo also coded up a massive lookup table for every unsupported character himself.
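The problem, sketched in Python (with an illustrative table, not Yeo’s actual one): characters that exist in DOS code page 437 pass straight through, and everything else needs a hand-picked stand-in.

    # Map a few common UTF-8 characters that code page 437 can't show.
    FALLBACKS = {
        "\u2019": "'",    # right single quotation mark
        "\u201c": '"',    # left double quotation mark
        "\u201d": '"',    # right double quotation mark
        "\u2014": "-",    # em dash
        "\u2026": "...",  # horizontal ellipsis
    }

    def to_cp437(text: str) -> bytes:
        out = []
        for ch in text:
            try:
                out.append(ch.encode("cp437"))  # a native glyph exists
            except UnicodeEncodeError:
                out.append(FALLBACKS.get(ch, "?").encode("cp437"))
        return b"".join(out)

    print(to_cp437("ChatGPT says \u201chello\u201d\u2026"))
    # b'ChatGPT says "hello"...'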

In response to a question, Yeo said, “I tried ChatGPT! It doesn’t understand what I’m trying to do!” But it was able to suggest UTF-8 characters for testing…

Yeo Kheng Meng talk at Engineers SG - ChatGPT tests getting DOS to display UTF-8 characters

Yeo even added a flag to his app — the command-line argument -cp737 — which switches output to the code page 737 character set for users who speak Greek.

Some people had suggested he add a text-to-speech function, so it would resemble the super-smart Defense Department computer from the classic 1983 movie WarGames.

So on May 11, Yeo did just that.

Back to the Future

But more importantly, he learned a lot along the way — about UTF-8, network sockets and console handling. And Yeo says he’s genuinely surprised by how much of it is still applicable to today’s computers.

“Computers have improved a lot in decades but a lot of concepts have stayed the same,” Yeo posted on LinkedIn, saying that the whole experience of building a ChatGPT client that can run on DOS “really gives you the appreciation of the well-developed tools we have today.”

And because he’d open-sourced his code, Yeo ultimately got to discover just how many people used the software he wrote for ancient retrocomputing machines. “You’d imagine the market for vintage computers isn’t that big,” Yeo told his audience at the end of his talk — “but surprisingly, a lot of people use it. And they file GitHub issues that tell me the bugs that my program has!”

Yeo recently added an important update to his blog post. Instead of interfacing solely with ChatGPT, “This client now supports Hugging Face too.”

We’d wanted to interview Yeo — but Hugging Face seems to have beaten us to it, in a lovely conversation that Yeo shared on YouTube. And so the experiment ended with Yeo sitting at his 39-year-old computer, chattering away with 2023’s latest AI chatbot…

Hugging Face: Do you have any hobbies you like to do?

Yeo: Yes. One of my hobbies is retrocomputing.

Hugging Face: Oh, that’s really cool. I’ve always wanted to learn how to do that. How long have you been doing that?

Yeo: For many years now. One of my machines is an IBM 5155, which I’m using to chat with you… Guess how old it is?

Hugging Face: I’m not sure, but it must be pretty old. It’s been around since 1958.

Yeo: Nah, it was first sold in 1984.

Hugging Face: Oh, I didn’t realize it was that old. I thought it was older than that…

The post Maker Builds a ChatGPT DOS Client for a 1984 Computer appeared first on The New Stack.

]]>
Developers Can Turn Turbulent Times into Innovation and Growth https://thenewstack.io/developers-can-turn-turbulent-times-into-innovation-and-growth/ Wed, 31 May 2023 16:53:38 +0000 https://thenewstack.io/?p=22709610

When dealing with uncertainty, a knee-jerk reaction for companies is to make cuts, especially in R&D and innovation. If the

The post Developers Can Turn Turbulent Times into Innovation and Growth appeared first on The New Stack.

]]>

When dealing with uncertainty, a knee-jerk reaction for companies is to make cuts, especially in R&D and innovation. If the last global crisis has taught us anything, it’s that this kind of reaction stunts growth and opportunities in the long run. While developers seem to be only on the receiving end of those changes, I’d argue they are the ones who can use instability to fuel innovation and growth.

While unavoidably challenging for all involved, times of crisis also present opportunities to:

  • Assess current processes and systems;
  • Eliminate entrenched orthodoxies;
  • Instill a sense of urgency, fostering a more collaborative and agile approach.

If It Works — You May Still Want to Fix It

In the spirit of crisis-driven development, we recently analyzed the chain of services used at Infobip to send RCS (rich communication services) and MMS (multimedia messaging service) messages. We found additional proxy services in the chain that adapt traffic to specific operators.

Incidentally, the service we use for sending SMS (short message service) and other types of messages already provides extensive functionality for customizing HTTP requests for various protocols. We unified the process with SMS connections, which our integrations and solutions engineers were already well familiar with.

In doing so, we reduced the number of moving elements, the number of resources required to maintain RCS and MMS connections, and, most importantly, the time required to create new connections from several weeks to a few days. This, in turn, enabled us to introduce new solutions to the market rapidly.

We’d probably never have revised the existing solution had we not gone into crisis-leveraging mode, as it generally functioned well and the process was thoroughly fine-tuned.

I was amazed at how seamlessly this initiative aligned with the company’s operations and how timely it was. Other teams readily embraced the changes, enabling them to concentrate on developing customer-facing features.

The example illustrates crisis-driven development, where existing systems are reevaluated, entrenched orthodoxies removed and urgency instilled to foster an agile and collaborative environment.

This approach values refactoring and pragmatic new development, with an emphasis on economic impact.

Refactoring the Right Thing

Refactoring is a powerful method for achieving better design and enhanced performance. Instead of centering the argument around quality, clean code or doing the right thing, in crisis-driven development developers should emphasize the economic impact.

It’s essential to determine whether refactoring makes a difference in delivering new features and can help accelerate development.

When deciding what to refactor, prioritize hot spots:

  • Code that exhibits obvious smells and changes frequently;
  • Code that multiple developers are working on at once;
  • Code sections that have just one main contributor.

Focusing on the fast-changing parts of the code first is important because these sections are often the most complex and prone to errors. By refactoring them, developers can simplify the code, making it easier to understand, maintain and modify. Since these areas are frequently updated, any improvements will have a more immediate and significant impact, leading to enhanced productivity and software quality over time.

Adopt a simple approach when refactoring, reducing moving parts and revealing intentions. You want the code to read like a coherent story, not a complicated detective narrative.

Adhere to the Single Level of Abstraction principle by ensuring that each method consists of instructions at the appropriate level of detail or abstraction. This promotes single responsibility and a single reason for the change. Additionally, aim for high cohesion and loose coupling in the code. This way, the code serves one function exceptionally well and relies on fewer stable components.
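As a small, generic illustration of the Single Level of Abstraction principle (a sketch, not Infobip’s code), the top-level function below reads as one coherent story, with the details pushed down into named helpers:

    def process_order(order: dict) -> None:
        # Each line sits at the same level of detail; the "how" lives below.
        validated = validate(order)
        priced = apply_pricing(validated)
        persist(priced)

    def validate(order: dict) -> dict:
        if not order.get("items"):
            raise ValueError("order has no items")
        return order

    def apply_pricing(order: dict) -> dict:
        total = sum(item["price"] * item["qty"] for item in order["items"])
        return {**order, "total": total}

    def persist(order: dict) -> None:
        # Stand-in for a database write.
        print(f"saved order {order.get('id')} with total {order['total']}")

    process_order({"id": 1, "items": [{"price": 5.0, "qty": 2}]})

Each helper has a single responsibility and a single reason to change, which is what keeps the top-level story readable as the code evolves.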

Pragmatic New Development

New development is crucial but should be done pragmatically, seeking fast feedback and focusing on practical solutions that can be implemented quickly and effectively. Prioritize actions and results to address immediate needs.

For instance, we use a hybrid cloud infrastructure. This popular approach combines our own infrastructure with cloud-based solutions for prototyping and launching new products, particularly those using AI.

The hybrid scheme allows for both flexibility and control, making it an ideal choice for organizations seeking to adapt and innovate rapidly while keeping costs under control.

To navigate new development successfully:

  • Use tools that enable you to deliver value quickly.
  • Be conscious of the various trade-offs involved in your decisions.
  • Make costs visible, ensuring all stakeholders understand the financial implications.

This means you should carefully consider what you spend time and resources building in-house and what you source from outside vendors. If you’re a startup looking to boost your support, you can use a communications platform as a service (CPaaS) to integrate these services rather than spending time building the features from scratch. That leaves your development capacity free to build functionality related to your core business. Using developer tools rather than building them in-house also has trade-offs to consider — you are, after all, relying on a third-party service.

Weigh carefully whether the trade-off is worth the value it gives in return; in this case, engineering capacity freed up to focus on your core business. And lastly, since a third-party solution is a financial cost, ensure all stakeholders are aware of and comfortable with it, and that there is a transparent and simple way to track costs at all times.

If the cost-benefit analysis in all three categories comes out in favor of building in-house, that is by all means pragmatic new development. If not, consider alternatives to building from scratch.
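To make the “buy” side of that calculation concrete, here is a minimal sketch of sending an SMS through a CPaaS REST API; the endpoint, auth scheme and payload shape are hypothetical placeholders, so check your provider’s documentation for the real contract.

    import json
    import urllib.request

    API_KEY = "YOUR_API_KEY"                    # placeholder credential
    BASE_URL = "https://api.example-cpaas.com"  # hypothetical provider

    def send_sms(to: str, text: str) -> dict:
        """Send one SMS via a (hypothetical) CPaaS messaging endpoint."""
        body = json.dumps({"to": to, "text": text}).encode()
        req = urllib.request.Request(
            f"{BASE_URL}/v1/messages",
            data=body,
            headers={
                "Authorization": f"App {API_KEY}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # One call replaces the carrier integrations, retry logic and delivery
    # reporting you would otherwise have to build and maintain in-house.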

Don’t Survive — Adapt and Thrive

The ability to quickly adapt and find innovative solutions is critical in times like these, but it is equally important to remain grounded in the key principles of software development.

I believe that with a deliberate and structured approach, software developers can navigate the challenges of a crisis with confidence and deliver impactful solutions that truly meet the needs of their users.

As former GitHub CEO Nat Friedman once said, “Pessimists sound smart. Optimists make money.” Embrace optimism and use crisis as an opportunity for growth and innovation in software development.

The post Developers Can Turn Turbulent Times into Innovation and Growth appeared first on The New Stack.

]]>
Bluesky vs. Nostr — Which Should Developers Care About More? https://thenewstack.io/bluesky-vs-nostr-which-should-developers-care-about-more/ Tue, 30 May 2023 14:51:11 +0000 https://thenewstack.io/?p=22709450

We’re in a new golden age of decentralized versions of Twitter. Mastodon (an open source project built on the ActivityPub

The post Bluesky vs. Nostr — Which Should Developers Care About More? appeared first on The New Stack.

]]>

We’re in a new golden age of decentralized versions of Twitter. Mastodon (an open source project built on the ActivityPub protocol), Bluesky (a company building the AT Protocol) and now Nostr (an open protocol project) are all attempting to fulfill the promise of Twitter in 2007, when it was erroneously called an open platform.

Intriguingly, each of these three projects is coming at the problem of social media from a slightly different angle. Add to the list Scuttlebutt, an open protocol that I profiled in 2021, and there are now multiple open protocols challenging the proprietary software of Elon Musk’s Twitter.

Because it’s probably the least familiar, let’s start with Nostr. In a recent interview, long-time social media developer Rabble (a.k.a. Evan Henshaw-Plath) explained that he has moved his focus from Scuttlebutt to the Nostr protocol. Rabble previously built Planetary, a decentralized social network product based on Scuttlebutt. But in March, Rabble announced a pivot to Nostr, along with a new product called Nos — a Nostr client app based on Planetary.

Rabble also commented on the differences between Nostr and Bluesky (which is basically a Twitter clone at this point). One of the appeals of Nostr, he said, is the flexibility to be able to create his own app. “Bluesky is real, but I don’t know if we’re gonna be able to make Bluesky flexible in the way that Nostr has all of these apps,” he said. “So, Scuttlebutt had all these crazy apps, and Nostr has all these crazy apps.”

“The servers in Bluesky are stronger and more opinionated in how they do it, to just do the needs of cloning Twitter,” he continued. “And so that’s going to make the building of all these other crazy apps, which is part of the fun of Nostr, really hard — but it might make the just straight Twitter social-like app easier.”

Nos, Rabble’s beta Nostr app.

Part of Rabble’s reluctance to embrace Bluesky (although he is by no means against the project) is that it isn’t yet as open as Nostr. The Bluesky protocol, AT Protocol, has been developed largely in-house so far, so external developers like himself have had limited say in its future direction. Also, the way Bluesky has been implemented so far has — perhaps by necessity — been less decentralized than Rabble would like. In particular, he points out that sign-in on Bluesky is not decentralized.

“It’s as if everybody in the network were using the same key, and then we just attach different identities to it,” he said, “and so that’s not a decentralized network.” He added that the company, Bluesky, promises in its terms of service that the key is “yours and you can move it to another server,” but he wants to see it before he believes it.

Bluesky for Devs

Bluesky, the product, is currently in private beta — just today, it broke the 100,000 user mark. However, more than half of that total have yet to post more than once (see image below), so the active community is more like 40-50,000.

Bluesky stats, 30 May 2023; via Jaz.

I am one of the privileged people currently on the service and, by current standards, I am pretty active there (54 posts at the time of writing). So far, Bluesky has reminded me a lot of early Twitter. The nascent community tries to have a fun vibe — “shitposting” is encouraged on Bluesky — which is a deliberate contrast to the more earnest Mastodon community. The user experience is also a lot more polished on Bluesky than on Mastodon, so the early indications are that it has a better chance of ultimately challenging Twitter, once Bluesky is opened to the public.

Not everyone is enamored of Bluesky, though. Jack Dorsey, who initiated the Bluesky project in late 2019 as a project within Twitter (it was later spun out as an independent company), has been critical of Bluesky this year. “Unfortunately they went a bit too hard on focusing on a Twitter product and not developer community,” he wrote on Nostr in April. Nostr has seemingly become Dorsey’s favored social network.

The same day Dorsey made that comment, Bluesky (the company) published a blog post about the AT Protocol developer ecosystem. Bluesky CEO Jay Graber claimed that there are already “many projects building on the ‘firehose’ of Bluesky app data” and that its development philosophy is “to build on existing software stacks that make interoperability simple.” She pointed out that “the at proto ecosystem” uses IPFS dev tooling, the DID specification for the identity layer, and an API that is “well specified and is simply JSON over HTTP.”
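That “JSON over HTTP” claim is easy to see in practice. Here is a minimal Python sketch against the XRPC endpoints as documented at the time of writing (the handle and app password are placeholders):

    import requests

    PDS = "https://bsky.social"

    # Log in: exchange a handle and app password for a session token.
    session = requests.post(
        f"{PDS}/xrpc/com.atproto.server.createSession",
        json={"identifier": "alice.bsky.social", "password": "app-password"},
    ).json()

    # Fetch the timeline using the bearer token the session returned.
    timeline = requests.get(
        f"{PDS}/xrpc/app.bsky.feed.getTimeline",
        headers={"Authorization": f"Bearer {session['accessJwt']}"},
    ).json()

    for item in timeline["feed"][:5]:
        post = item["post"]
        print(post["author"]["handle"], "-", post["record"].get("text", ""))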

A month later and there is now a healthy list of AT Protocol projects, which at first glance bears some similarity to the third-party projects built on top of the late-2000s Twitter API.

Bluesky in May 2023, while still in private beta.

This does sound promising, but as Rabble pointed out, the AT Protocol developer ecosystem is fairly tightly controlled by Bluesky at the present time. Although it is all open source (unlike Twitter, even in its early days), we don’t yet know what kind of centralized pressure Bluesky (the company) might exert in future.

Indeed, one of the criticisms of ActivityPub is that Mastodon — by far the biggest project running on the protocol — might have an undue influence in the further development of ActivityPub. So there is a danger that a similar risk presents itself in the AT Protocol, with Bluesky dominating proceedings.

Why Should Devs Care About Nostr?

A key benefit that developers like Rabble see in Nostr is that there is little apparent risk of power coalescing in a centralized project (like Mastodon) or company (like Bluesky).

According to its GitHub page, the protocol “doesn’t rely on any trusted central server, hence it is resilient; it is based on cryptographic keys and signatures, so it is tamperproof; it does not rely on P2P techniques, and therefore it works.”

The name of the project is an acronym for “notes and other stuff transmitted by relays.” Relays are servers, but they aren’t massive hubs like on Mastodon (at least when we’re talking about the main servers, like mastodon.social) or Bluesky. “To publish something, you write a post, sign it with your key and send it to multiple relays (servers hosted by someone else, or yourself),” states the Nostr documentation.
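Here is what that looks like as a minimal Python sketch of a NIP-01 event, as I read the protocol docs: the event id is a SHA-256 hash of a canonical serialization, the signature is BIP-340 Schnorr over that id (left as a placeholder below, since it needs a signing library), and publishing is one JSON frame sent to each relay over a WebSocket.

    import hashlib
    import json
    import time

    pubkey = "<32-byte-hex-public-key>"  # placeholder
    created_at = int(time.time())
    kind = 1        # kind 1 is a short text note
    tags = []
    content = "hello from a sketch"

    # NIP-01 canonical serialization: a JSON array with no extra whitespace.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    event_id = hashlib.sha256(serialized.encode()).hexdigest()

    event = {
        "id": event_id,
        "pubkey": pubkey,
        "created_at": created_at,
        "kind": kind,
        "tags": tags,
        "content": content,
        "sig": "<BIP-340 Schnorr signature over event_id>",  # placeholder
    }

    # Publishing is just this frame, sent to whichever relays you choose.
    frame = json.dumps(["EVENT", event])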

Nostr relays.

Rabble described the Nostr architecture as “small pieces loosely joined.” When he talks about why he chose to pivot to Nostr for his decentralized social media product, Rabble sounds distinctly nostalgic.

“So when I started working on decentralized social, I wanted us to go back to the world of the Facebook app platform and the Twitter API, where all sorts of developers were coming up with all sorts of crazy ideas and you didn’t need permission to do them.”

I assume he’s referring to the early years of Facebook and Twitter, but in any case, he sees a similar permission-less environment currently in Nostr. “I want that easy space by which we can have a lot of different apps,” he said. “And at the moment that exists on Nostr, but doesn’t on Bluesky — […] until the point at which they [Bluesky] don’t have a choke point to turn it off, then you can’t trust it.”

I’ve played with Rabble’s new app, Nos, and it was easier to use than another Nostr client I’d tried. That said, the Nostr network is still difficult to get your head around and so far I’ve yet to make any social connections on there. The content I’ve seen on the network has a strong libertarian bent, indicative of its roots in the Bitcoin community (both the creator, known as fiatjaf, and Dorsey are known Bitcoin proponents).

Conclusion

When it comes to evaluating decentralized social media protocols, it’s too early to say whether AT Protocol, Nostr or ActivityPub is better for developers. Bluesky certainly is the most likely to challenge Twitter, but will that come at the expense of less control for developers? Nostr looks the most flexible of the protocols, and the ‘safest’ in terms of developer control, but it’s also the least likely to reach a mainstream user base.

Ultimately, it depends on what your goals are as a developer. If you’re aiming to reach a wide user base, Bluesky or Mastodon are your best bets. But maybe, like Rabble, you’d rather have complete control over your app’s destiny.

The post Bluesky vs. Nostr — Which Should Developers Care About More? appeared first on The New Stack.

]]>
Meet The Hobbyists Building Their Own DIY Cyberpunk Devices https://thenewstack.io/meet-the-hobbyists-building-their-own-diy-cyberpunk-devices/ Mon, 29 May 2023 13:00:29 +0000 https://thenewstack.io/?p=22708968

Back in the 1980s, William Gibson’s science fiction novels envisioned a coming dystopian future where cyberspace was accessed with head-mounted

The post Meet The Hobbyists Building Their Own DIY Cyberpunk Devices appeared first on The New Stack.

]]>

Back in the 1980s, William Gibson’s science fiction novels envisioned a coming dystopian future where cyberspace was accessed with head-mounted interfaces. In his 1982 story “Burning Chrome,” two hackers use them for “casing mankind’s extended electronic nervous system, rustling data and credit in the crowded matrix.” (While “high above it all burn corporate galaxies and the cold spiral arms of military systems…”)

“Burning Chrome” book cover (the collection that includes “Red Star, Winter Orbit”).

But here in our own real-world future, enthusiastic hobbyists are now trying to make it all come true — or at least, jerry-rigging their own home-brewed “cyberdecks” for accessing the internet.

It’s the ultimate project for cyberpunk fans: cobbling together their own gear using repurposed leftovers and cheap surplus parts, plus all the right components from suppliers catering to makers.

But instead of cracking corporate data silos with a tricked-up Ono-Sendai “Cyberspace VII” (as William Gibson imagined), these enthusiasts are just sharing their creations on social media for bragging rights, and to celebrate their own maker successes. And like any home project, they also always seem to be learning an awful lot about technology.

It’s inspiring and it’s exciting. And it also looks like it’s a lot of fun…

Sunglasses at Night

For a head-mounted solution, some cutting-edge makers are now experimenting with the newly released Nreal Air (renamed Xreal) sunglasses, which come equipped with a small built-in (micro-OLED) screen. A USB-C cable connects them to your computer or smartphone.

Marketed as “AR glasses,” they display output from the company’s “spatial internet” app (currently available on “select” Android devices). But the glasses can also function as a head-mounted display, according to their website, transforming a laptop or monitor into what’s essentially a “cinema-sized 201-inch screen.”

And UK-based futurist Martin Hamilton calls new products like these “the real breakthrough” for finally jerry-rigging your own cyberdeck. Hamilton says in an email interview that Nreal’s micro-OLED screens can give cyberpunk makers a full HD display “with a decent field of view.”

UK-based Martin Hamilton made a cyberdeck with Nreal Air sunglasses powered by an old ThinkPad.

“If you’ve used a VR headset then you’re probably expecting something similar — like strapping a phone to your face. These are different because the glasses weigh very little (79 grams, or around three ounces), due to all the clever stuff happening on your phone or computer. In particular, there’s no battery, as the glasses are powered by the same USB-C cable which feeds the video from your device.”

To create his own home-brewed cyberdeck, Hamilton bought a pair of the Nreal Air glasses, then hooked them up to a five-year-old ThinkPad laptop with a broken screen. “Right now this really feels like a hacker’s device,” he said in an email (which he composed using his home-brewed cyberdeck).

“ThinkPads are pretty good for this kind of thing because they’re designed to be repairable,” Hamilton wrote. After unscrewing the screen’s hinges to remove it — and detaching its cables — it’s a self-contained unit “without any unsightly gaps.”

Instead of wearing the sunglasses over his prescription eyeglasses, he was even able to purchase prescription lens inserts from Nreal’s official partner.

Hamilton shared his adventure with other DIY-cyberpunk enthusiasts in Reddit’s Cyberdeck subreddit. (“The era of virtual reality is coming,” says the subreddit’s description, “so it is also time for cyberdecks to come…”)

He’s calling his new ThinkPad-derived cyberdeck a “ThinkDeck,” telling the forum that he’s been “using the glasses as a big head mount display,” for everything from coding and sys-admin work to sending email, surfing the web, and watching videos. (“You wouldn’t want to wear the glasses for more than about an hour at a time, but then you should probably be getting a screen/movement break at this point anyway.”)

ThinkPad + Nreal Air = ThinkDeck
by u/martin_hamilton in cyberDeck

There are also practical considerations. Hamilton wonders if governments and corporations will demand their staff start using these eyeglass-based interfaces (with no screens) for the extra privacy. In a world where biometric fingerprint scanners already control access to data on encrypted partitions, wouldn’t this be the next logical step?

“You can just plug in a screen when you want one,” Hamilton said — for example, by connecting a projector for “a wall-sized display that other people can also see.”

And yes, he told me, it does feel like something out of a William Gibson story. Writing code in Linux, “my field of view is full of terminal windows and debug output,” Hamilton writes, adding that this “seems appropriately cyberpunk.”

In William Gibson’s novel Neuromancer, the protagonist’s computer was a Hosaka — so Hamilton has given his cyberdeck the same name.

“It was either that or Ono-Sendai Cyberspace VII, but that’s a bit of a mouthful…”

A Yearned-for Future

Hamilton isn’t the only one home-brewing his own technology. Belgium-based Ken Van Hoeylandt has built his own tiny handheld PC by crafting a custom 3D-printed case for his Raspberry Pi CM4 (and a “Raspberry Pad” screen from BIGTREETECH), hooking everything up to a modified Bluetooth mini keyboard.

Decktility – An open source/hardware handheld PC
by u/ByteWelder in cyberDeck

In Detroit, a turntablist and music producer named “DJ Vulchre” has been uploading videos of their own home-brewed cyberdecks — the latest made with a GOLE1 Pro pocket-sized PC and a lens to magnify the interface for their music software.

And Hong Kong-based YouTuber Winder Sun has built his own small pocket PC. He started with an 8-inch touchscreen display from component vendor Elecrow, then mounted it in a hollowed-out portable radio case — along with a keyboard and a small portable charger.

Sun is now proudly using it to write code, including mods for the space exploration game No Man’s Sky. In the video, he jokes that it’s “a Cyberdeck That Should Go Straight to E-Waste” but adds that it “feels like a very cyberpunk thing to do…”

“This thing is the jankiest thing ever, and I love it. Working with this in a dilapidated concrete jungle was a delight… At least I look like a cyberpunk now, and in my sick twisted mind it’s worth everything.”


“I think the cyberpunk community yearns for this future we never got,” tinkerer Brendan Charles told GameSpot last month, “and making these kinds of projects allows us to make it a reality.”

Charles built a battery-powered micro-PC out of a 1990s-era “Talking Whiz Kid” toy, learning everything he needed to know along the way about soldering, sanding, painting and 3D printing — and even some basic electronics. “You can find premade modules and connectors to do almost anything you want, from LCD displays, to controllers, to battery packs,” Charles told GameSpot.

My quarantine project: The Ceres 1, a battery powered portable PC
by u/ThisIsTheNewSleeve in cyberDeck

Hackaday Prize promotional poster.

GameSpot described Charles as part of “an entire community of talented builders using tiny computers like the Raspberry Pi to build the cyberpunk setups of their dreams.” And the tech-projects site Hackaday even has its own section dedicated to homemade cyberdecks. It notes a popular feeling that a true cyberdeck should be “a custom rig built up of whatever high-tech detritus the intrepid hacker can get their hands on.”

And along those lines, the site recently featured a maker who created a mobile satellite-monitoring station from a touch-screen computer salvaged from an early-2000s police cruiser.

A home-brewed pocket computer also placed in an early round of the Hackaday Prize competition (which culminates with a $50,000 prize in September) when maker Spider Jerusalem wrapped a 3D-printed case around a Raspberry Pi 4 board connected to an LCD screen and a full QWERTY keyboard. “It’s a useful tool if you need to interface with a server on the go or do some low-level network diagnostics without carrying a whole laptop around,” Hackaday suggested.

When you’re brewing your own technology, the possibilities are endless. A maker named “Frumthe Fewtcha” even built a ChatGPT-enabled smartwatch that could answer any question, according to their recent video on YouTube. The answers appear as text on the watch’s display — and are also piped as audio into earbuds.

Writing from his home-grown cyberdeck, Hamilton said he felt like we’ve finally achieved a piece of that future that we were always promised. “In the almost 40 years since Neuromancer was published it feels like the world has caught up with William Gibson’s imagination, from mRNA-based gene editing to Large Language Models that seem almost sentient.”

But he also believes there are some practical advantages to a world where you can build your own head-mounted cyberdeck. “I’ve also spent a lot of those 40 years hunched over laptop screens, and it’s really liberating to be able to move your head around to wherever is comfortable!”

The post Meet The Hobbyists Building Their Own DIY Cyberpunk Devices appeared first on The New Stack.

]]>
Bad by Design: The World of Intentionally Awful User Interfaces https://thenewstack.io/bad-by-design-the-world-of-intentionally-awful-user-interfaces/ Sun, 28 May 2023 13:00:21 +0000 https://thenewstack.io/?p=22708690

They’re funny — and strangely relatable. But also thought-provoking, if not downright educational. And, lately, these intentionally bad interfaces have

The post Bad by Design: The World of Intentionally Awful User Interfaces appeared first on The New Stack.

]]>

They’re funny — and strangely relatable. But also thought-provoking, if not downright educational.

And, lately, these intentionally bad interfaces have been provoking a new round of laughter on Twitter…

Because it’s 2023, one of the most insightful reactions was apparently even generated by an AI bot.

“Well, at least they’re finally acknowledging that engineers are responsible for the worst user interfaces on the internet.

“Maybe next they can have a competition for who can create the most confusing automated customer service system…”

But it turns out the comical controls are all part of a long-standing meme — a kind of internet running joke with a dire yet light-hearted warning for our times.

“I build user interfaces for a living, and my primary source of inspiration during a 20-year-long career has been bad interfaces,” said Swedish frontend developer and interface designer Hakim El Hattab. It’s all a reminder that frustrating consumer experiences can reach cosmically comic proportions. That we’re all at the mercy of engineers building our user interfaces.

And that our programming tools are now powerful enough to let bored internet satirists dream up alternate interfaces whenever they feel like it…

Challenge Accepted

All the bad design craziness apparently started back in 2017 in Reddit’s “Programmer Humor” subreddit, when a user posted a tall green volume bar that needed to be dragged… horizontally. “Who can make the best volume slider?” the post had asked.

And the message was heard, the challenge accepted…

“It was wild,” remembered a recent comment on Reddit. “For weeks the subreddit was just terrible volume controllers.” After 11,000 upvotes and 403 comments, the original discussion thread was permanently archived (and closed for new comments).

But then dozens of new discussion threads were started for each new, bad volume-control interface.

GPS Volume Control
by u/mrzacharyjensen in ProgrammerHumor

Over the years the best ones have been featured in writeups by sites like Bored Panda and the Verge. Back in 2017, designer Fabricio Teixeira even collected them together for a post on his design blog UX Collective, calling their attempts “a fun exercise/joke, that can strengthen one’s creative muscles and ability to think outside of the box.”

Yet these curated collections only scratched the surface of the original event, missing the scope of just how many more different volume controls were created. Even the ones circulating now on social media had each offered their own tantalizing headlines.

There were many more — and all of them were oh-so-creatively bad. “It’s like a daily Far Side comic, with a volume theme…” one Redditor had posted at the time — adding “I kinda hope it goes on forever.” But along the way, maybe this spontaneous outpouring also offered some grassroots meta-commentary about our own over-engineered world…

Implicit Critiques

One poster imagined a volume control interface that subjected its input to the usual checks for password strength. (“Your volume must have at least six characters… Your volume must contain at least one uppercase letter… “)

Another imagined a volume control whose upper bound was…the amount of charge left in your battery. (“Please charge device to increase volume…”)

Running low on volume
by u/fluiux in ProgrammerHumor

And then there was the seemingly innocuous volume controller that demanded of its users, “To change volume to 35%, please solve CAPTCHA below…”
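In the spirit of the meme, and of the community’s habit of sharing source, here is a minimal sketch of how that password-strength gag might be wired up (a hypothetical reconstruction, not the original poster’s code):

    def set_volume(volume: str) -> str:
        # Treat the requested volume like a password field. You're welcome.
        if len(volume) < 6:
            return "Your volume must have at least six characters."
        if not any(c.isupper() for c in volume):
            return "Your volume must contain at least one uppercase letter."
        if not any(c.isdigit() for c in volume):
            return "Your volume must contain at least one number."
        return f"Volume set to {volume}. Probably."

    print(set_volume("35"))       # rejected: far too short
    print(set_volume("Loud35!"))  # finally acceptable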

The blog post on Teixeira’s site argues that “there’s definitely a reflection point about the state of our industry here.”

In short, Teixeira believes designers today often feel the urge to innovate (not to mention professional pressures) — and that urge is met with the easy availability of tools. But that doesn’t mean the innovation needs to happen, Teixeira writes.

“Let’s be honest: the volume control design pattern has been around for decades, works pretty well for the majority of users, and is incredibly familiar to a lot of people. You don’t need to reinvent it.”

“People expect interfaces to look and behave in a predictable way,” acknowledged UI designer Hakim El Hattab — before emailing me two of their alternate versions of a checkbox. “I think it’s a lot of fun to break the rules and try to surprise people.”

But even as the original meme offered its implicit critique of the design industry, there were also some moments that were oddly educational for programmers. One maker actually built their volume control in the real world — using an Arduino that adjusts the volume based on the amount of heat detected by a temperature sensor.

And one stripped-down user interface even needed the desired volume to be spelled out in Morse code.

Single button volume interface
by u/LinAGKar in ProgrammerHumor

That same thoughtful spirit continues to this day — even as the phenomenon has become almost a Reddit institution. There’s now a subreddit on the site dedicated just to “Intentionally bad User Interfaces,” which was “spawned” from the “ProgrammerHumor” subreddit and its “constant need to find bad UI then make more of it” (according to the subreddit’s official description).

Named “badUIbattles,” this subreddit now has 203,000 members committed to creating “bad UIs just for the sake of them being bad.”

And yet to this day, there’s also a tradition of encouraging posters to share their source code. “I think it’s important to share the source wherever possible so that others can learn,” Hakim El Hattab told me. “I learned that way myself and it’s nice to pay it forward.”

Spreading to the Web

In the new subreddit, El Hattab recently shared their own perfectly maddening interface that perhaps sends a second message: that the people who make “unsubscribe” buttons secretly don’t want you to.

Working on my new unsubscribe page
by u/hakimel in badUIbattles

A hint may be hidden in its URL: clickmeifyoucan.

But part of the subreddit’s charm is reading the comments, where people post their honest reactions, marvel at the UI’s ingenuity and share a laugh.

“Thanks, I hate it”

“In germany, stuff like this would be illegal”

And then there was that one user who wrote sardonically…

“I would like to subscribe to your newsletter…”

So the tradition continues as the years roll along, and the new subreddit even inspired Brazil-based physics engineer (and self-taught programmer) André Goulart Nogueira to create a web repository for all the “best (or worst?) bad-UI I’ve seen.”

And Nogueira’s own creations also appear on that page, including an interface for submitting your phone number which uses nothing but a sliding selector that scrolls through… every possible phone number. (Selecting “Advanced Mode” even activates a second slider which moves the first slider — but only if you tilt it just right…)

Meanwhile, Nogueira’s “birthday selector” also seems deceptively easy — until you realize it wants you to scroll through every day of your life until you reach the exact date of your birth. (Although two additional options let you start with the year 1 A.D. — or with the year 9999…)

And over in the badUIbattles subreddit, another user shared their even more diabolical creation: the notorious “tabbed keyboard.” Implemented with some simple JavaScript (in an HTML/CSS webpage), its distinguishing feature is really its lack of a keyboard. (You can try it yourself in a web browser.) The “Enter User Name” window just contains a single key — marked with a “plus” symbol — that, when clicked, will create a second window with exactly one more key, which you can use only for typing its one randomly assigned letter of the alphabet.

Tab back to that first window — the plus sign — and you can click the “plus” key again to create another window with a key for typing one more letter… Then continue until you’ve randomly received enough letters to type out your entire user name. (Plus, the additional “submit” key you’ll actually need for entering that name…)

The punchline? The user interface is programmed to then tell you “Username already taken. Please pick another.”

The creator joked on Reddit that it’s good for people with poor eyesight — since to view each single key, they can use the entire screen.

And again, comments of appreciation flooded in.

“God it’s awful… good job.”

“This is even worse on mobile and I love it.”

One commenter even sarcastically applauded the interface for “ensuring the screen isn’t cluttered by any functions you don’t need at that exact moment. Truly the future.”

“When will user experience jokes not be funny? Probably never,” quipped the Verge, “as services/products continue to be in a constant race with themselves to make things ‘better’ while often neglecting how we interact with them…” And sure enough, back in Reddit’s original “ProgrammerHumor” subreddit, yet another intentionally-bad volume control interface appeared earlier this year.

Though this one appears to be more of a joke about Vim.

I’ve suddenly remembered the old challenge to make the worst volume slider, so here’s my entry. Unexitable
by u/sicve011 in ProgrammerHumor

But maybe satire is the sincerest form of criticism, pointing a way forward to a better world. “It’s incredibly frustrating when a simple task is made difficult by a poorly designed interface,” Hakim El Hattab told me.

“This frustration with bad interfaces has taught me what to avoid and enabled me to create more user-friendly interfaces at work.”

The post Bad by Design: The World of Intentionally Awful User Interfaces appeared first on The New Stack.

]]>
Is Open Source the Original Product-Led Growth? https://thenewstack.io/is-open-source-the-original-product-led-growth/ Thu, 25 May 2023 14:32:59 +0000 https://thenewstack.io/?p=22709030

Take a journey with me back to December 2022. I’m in job-hunting mode, and in interviews the term “PLG” comes

The post Is Open Source the Original Product-Led Growth? appeared first on The New Stack.

]]>

Take a journey with me back to December 2022. I’m in job-hunting mode, and in interviews the term “PLG” comes up. I haven’t heard the term before, so after a quick Google search I learn that “PLG” stands for product-led growth, and it’s been around for a while. I read some articles, and the more I learn about PLG, the more I realize that this is the open source software use model I’ve been working with for nearly a decade. Wow, I didn’t know it had a name!

To give you a little history: according to a blog post at OpenView Partners, the term “product-led growth” was originally coined in 2016 by Blake Bartlett at the venture capital firm, “although the principles that define it had been around before that.” It started between 2012 and 2014, when Bartlett saw that product-market fit was only part of the battle when promoting products. Companies needed to be obsessed with product distribution too.

“Great companies pay close attention to how to remove friction and turn their product into a marketing asset,” the blog post states.

Pivot Back to Open Source: Part 1 — Developers

I’ll be clear: open source software was not started to support a PLG model. Open source software has a set of benefits that we all know and live by.

What I am saying is that in the organizations I’ve worked at, bringing in users at the open source software level is a great first step in giving users hands-on experience with the technology.

Let me give you another reference point. Stephen O’Grady’s book, “The New Kingmakers: How Developers Conquered the World,” “explores the rise of the developer class, its implications and provides suggestions for navigating the new developer-centric landscape.” (If you haven’t read it, you should!) To summarize, developers are the most important asset organizations have. With the availability of open source and free versions of software, they go out and find the tool they need. They don’t ask, they just download and start using it. Or they take the technology they find, build upon it, contribute their enhancements (or not) to the open source project and use it to make their day-to-day tasks better.

Then organizations figure out they need to give these highly talented developers, DevOps teams and operators (collectively “practitioners”) the freedom to get what they need to do their job. If they don’t, these talented individuals will go to an organization that will let them do their job, and your organization will be stuck with the super-hard problem of replacing that talent.

So the practitioners in an organization are the people having a huge influence on the technologies an organization is using.

Pivot Back to Open Source: Part 2 — Technology Decisions

We know that with open source software, anyone can download and use it. They can fork it, contribute to it and deploy it in their environment. They use the software, ingrain it in their environment, and when it comes time for the CIO to say, “Hey, we need something that does <this>,” the practitioner tells them they have <this thing> they’ve been using, it solves <that> problem, and they love it. The CIO says great, and the technology is blessed.

Then, maybe the CIO says, “Maybe we should get support with this so you can focus on your job and not have to keep this thing up to date in our infrastructure.” The practitioner says great but may be a little sad because they don’t get to use and contribute to open source; however, they also realize that there are other open source projects that are pretty nifty. The CIO calls the 1-800 sales number or fills out a form, and someone in sales does the paperwork to sell it to the organization.

How Is Open Source Like Product-Led Growth?

This simplified example is product-led growth in a nutshell. The practitioner looks for a tool that will do something in a better way than the roundabout way they had been doing it. They find an open source project that works for what they need, and they become a fan of the technology. In PLG, the practitioner finds the free version of the software and … well, you get the picture.

Which brings me back to PLG. I was brought into my current organization because I know open source, I have worked with developer communities, and I know how to build a community of users. What I think about is education, creating content the practitioner audience cares about, giving individuals a sandbox to play in, having a clear and user-focused journey for getting started with the technology, and then figuring out where these individuals are getting their information and being in those places. Marketing 101.

In Summary

The practitioner audience — developers, operators, DevOps, DevSecOps and contributors — is who we should focus on. They are the people for whom we are developing technology — to help make their cloud native environments easier to use, more secure and faster to get information out of — all of that and more. When we let go of control and put the technology decision in the hands of the users, people will choose to use something because it’s good.

Learn more about Cisco Open Source and join our Slack community to be part of the conversation!

The post Is Open Source the Original Product-Led Growth? appeared first on The New Stack.

]]>
AppSec ‘Worst Practices’ with Tanya Janca  https://thenewstack.io/appsec-worst-practices-with-tanya-janca/ Wed, 24 May 2023 15:08:00 +0000 https://thenewstack.io/?p=22708943

At the recent RSA 2023 conference in San Francisco, Tanya Janca presented an only slightly tongue-in-cheek keynote speech covering fifteen

The post AppSec ‘Worst Practices’ with Tanya Janca  appeared first on The New Stack.

]]>

At the recent RSA 2023 conference in San Francisco, Tanya Janca presented an only slightly tongue-in-cheek keynote speech covering fifteen “worst practices” committed by DevOps teams. As she noted in introducing her talk, “Quite often when we read best practices we are told ‘what’ to do, but not the ‘why’. When we are told to ensure there are no false positives in the pipeline, the reason seems obvious, but not every part of DevOps is that intuitive, and not all ‘best practices’ make sense on first blush.”

Janca, who goes by SheHacksPurple, is the best-selling author of ”Alice and Bob Learn Application Security” and the founder of We Hack Purple, an online learning community that revolves around teaching everyone to create secure software. Janca has been coding and working in IT for over 25 years, has won countless awards and has been everywhere from public service to tech giants, writing software, leading communities, founding companies and “securing all the things.”

After her keynote, Janca sat down with Taylor Armerding of Synopsys for an episode of AppSec Decoded where they did a deep dive into four of the most crucial “worst practices” that organizations want to avoid when building a robust DevSecOps program.

Q: What do you mean by runaway testing? And why is it a problem?

So, basically, all the people that do DevOps, they want to go fast. And let’s say I buy a new analysis tool to put in my pipeline. Best practices would be that I make sure to time it, to check that it’s actually fast. Then I need to tune it, adjust it, play with it a lot, so that I can make sure it’s testing what it needs to test and doing that fast. The worst practice, and I’ve seen this happen, is where people just put the tool directly into a release pipeline live. Maybe it works right out of the box, but what’s much more likely is that the test takes like six hours. This means you’ve clogged up your pipeline, and you’re wasting other people’s time because no one can get any work done. You’ve monopolized all the resources by not testing your tools first. This can make you really unpopular.
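Janca’s advice maps directly onto pipeline configuration. As a rough sketch of the idea (the scanner name, flags and timeout value below are hypothetical placeholders, not a specific tool she endorsed), you could time a new tool outside the release pipeline first, then enforce a hard ceiling once it gates releases:

    # Dry-run the scanner outside the release pipeline and time it first.
    # "security-scanner" stands in for whatever tool is being evaluated.
    time security-scanner --target ./src --report report.json

    # After tuning, enforce a hard ceiling so a slow or hung scan fails fast
    # instead of monopolizing shared runners for hours.
    timeout 15m security-scanner --target ./src --report report.json \
      || { echo "Scan failed or exceeded 15 minutes; failing the job fast"; exit 1; }

Here, timeout is the GNU coreutils command, so a runaway scan fails quickly and visibly rather than clogging the pipeline for everyone else.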

Q: Why is only worrying about your part a “worst practice”?

This isn’t necessarily a technical thing. It applies to just about any workplace. I used to be a software developer, and I would kind of just throw my app over the wall to Ops and be like, “Have fun with that! Works on my machine!” but in reality, DevOps is all about emphasizing the speed of the entire system, and not just your part. Unfortunately, developers experience security as someone who comes in like three-quarters of the way through a project and says, “By the way, over the next two weeks we’re testing this.” If you’re a developer, this delay is not on your schedule, and you still have to make your deadlines. So, security becomes something that you see as impeding your work. To get to true DevSecOps, it’s really really important that we actually worry about other people’s parts in the development cycle and how our parts fit into their activities so we can work together.

Q: What makes hiding mistakes and errors a “worst practice”?

So this is a hard one for me. Someone had to teach this lesson to me, and while I’d like to pretend I’ve never done any of the 15 worst practices, this is the one that was the hardest lesson. I was having a lot of trouble getting developer buy-in from this one team. The manager told me I was wasting everyone’s time with “that security stuff” and so my boss said, “We have to connect with this team.” We’d had a giant security incident with one of their apps, so he called a closed-door Chatham House Rules meeting with them. It was an SQL injection incident, and we found out our data was for sale on the dark web for $50! First of all, that made us feel bad because we’re worth more than $50, and even worse, we found out when the media wanted a comment. My boss is telling them this and the team is being sympathetic, and that’s when he tells them, “That was your app and Tanya tried to test it, and you wouldn’t let her. It cost us hundreds of thousands of dollars and who knows what it’s going to do to our reputation because you won’t listen.” I was just like redder than my dress because I felt like I made a mistake, and that’s when they started talking to me. They wound up vowing that this will never happen on our watch ever again. From then on, they were my No. 1 champions. So that’s how I learned to share when I screw up, and learned to say, “Hey, I’m having trouble with this.”

Q: Which “worst practice” do you want to expand on?

Impossible SLAs. Having reasonable service-level agreements is so important. When I work with enterprise clients, they already have tons of software that’s in production doing its thing, but they’re also building and updating new stuff. So I have two service-level agreements and one is the crap that was here when I got here and the other stuff is all the beautiful stuff we’re making now. So I’ll set up my tools so that you can have a low vulnerability, but if it’s medium or above, it’s not going to production if it’s new. But all the stuff that was there when I scanned for the first time, we’re going to do a slower service-level agreement. That way we can chip away at our technical debt. The first time I came up with parallel SLAs was when this team lead asked, “Am I going to get fired because we have a lot of technical debt, and it would literally take us a whole year just to do the updates from the little software compositiony thing you were doing.” “No one’s getting fired!” I said. So that’s how we came up with the parallel SLAs so we could pay legacy technical debt down slowly like a student loan versus handling new development like credit card debt that gets paid every single month. There’s no running a ticket on the credit card!
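One way to picture those parallel SLAs in an actual pipeline (a loose sketch; the findings.json file and its severity and component fields are made-up stand-ins rather than any particular scanner’s output format) is to block releases only on medium-or-higher findings in new code, while counting legacy findings as tracked debt:

    # findings.json is a hypothetical scanner export shaped like:
    # {"findings": [{"severity": "medium", "component": "new"}, ...]}
    NEW_BLOCKERS=$(jq '[.findings[] | select(.component == "new"
        and (.severity == "medium" or .severity == "high" or .severity == "critical"))
        ] | length' findings.json)
    LEGACY_DEBT=$(jq '[.findings[] | select(.component == "legacy")] | length' findings.json)

    # Legacy debt is reported and paid down slowly, like the student loan.
    echo "Legacy findings under the slower SLA: $LEGACY_DEBT"

    # New code is held to the strict SLA, like the monthly credit card bill.
    if [ "$NEW_BLOCKERS" -gt 0 ]; then
      echo "Blocking release: $NEW_BLOCKERS medium-or-higher findings in new code"
      exit 1
    fi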

Q: What’s at the root of these “worst practices”?

One day when I was early in my career as a developer, a security person ran a VA scanner on my app. When I asked how do I fix these errors, he just said, “You should know.” I was panicked. How should I know? I’d had zero training on secure practices. They didn’t talk about this in college. No one ever gave me a book. None of my colleagues spoke about this. What do you mean, I should know? I learned that actually he had no idea what the answer was, and he felt insecure, so he did the “blame her and shame” so I wouldn’t ask again. Most people get into development because they want to build an amazing app that delights their customers, that does all the things they asked for, and they’re doing it the way they’ve been taught. What we need to do as an industry is start sharing from the very first lesson how to do things securely.

AppSec Decoded Can Help

AppSec Decoded is a regular production of the Synopsys Software Integrity Group. You can access videos on security topics and interviews with key players in this space on its YouTube channel.

The post AppSec ‘Worst Practices’ with Tanya Janca  appeared first on The New Stack.

]]>
Overcoming the Kubernetes Skills Gap with ChatGPT Assistance https://thenewstack.io/overcoming-the-kubernetes-skills-gap-with-chatgpt-assistance/ Tue, 23 May 2023 18:00:45 +0000 https://thenewstack.io/?p=22708898

The challenge for Kubernetes (K8s) has been its complexity coupled with the lack of in-house skills to provision and troubleshoot

The post Overcoming the Kubernetes Skills Gap with ChatGPT Assistance appeared first on The New Stack.

]]>

The challenge for Kubernetes (K8s) has been its complexity coupled with the lack of in-house skills to provision and troubleshoot Kubernetes clusters in the cloud or on-premises. While K8s adoption is consistently growing, many developers and operations teams are still relatively new to the technology, having to learn new terms, workflows, tools and more.

K8s’ discrete parts require a broad level of expertise, even just in the setup process. Consider that spinning up a K8s cluster requires knowledge of, and configuration for, multiple components, from pods to services, not to mention control-plane and node components like etcd, the API server, the kubelet and kube-proxy.
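Even a basic sanity check on those moving parts assumes fluency with kubectl. As a small illustration (assuming a cluster you can already reach, and a kubeadm-style setup where kube-proxy ships its configuration as a ConfigMap), surveying the control plane and node agents might look like this:

    # List the control-plane and system components running in kube-system
    kubectl get pods -n kube-system

    # Confirm every node's kubelet is registered and reporting Ready
    kubectl get nodes

    # Inspect the kube-proxy configuration distributed to the nodes
    kubectl describe configmap kube-proxy -n kube-system

None of these commands is exotic, but knowing which one answers which question is exactly the expertise the setup process quietly assumes.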

Then there’s scheduling, scaling and networking to contend with. One misstep can quickly translate into myriad issues with scalability, reliability, even security.

Further, the ecosystem itself is growing and evolving rapidly. The sheer number of tools and add-ons can be a lot for beginners and hard to keep up with. Formal education and training around Kubernetes is not necessarily part of every developer’s background, academically or professionally.

And we can’t forget that the technology, with many moving parts and complex interactions, can be difficult and time-consuming to troubleshoot when failures occur. Diagnosing the causes of the failures requires in-depth technical knowledge and expertise that often live inside a few experienced engineers’ heads.

Let’s dive deeper and explore new and innovative ways that can help overcome the glaring skills gap problem.

Upskill Your Team with ChatGPT Assistance

Kubernetes is challenging to learn and use effectively because there’s no one-size-fits-all approach to it. K8s is highly customizable and can be configured in many different ways, depending on the specific needs of your application or infrastructure. It’s often difficult to apply what you’ve learned from documentation (and there is plenty) and training to an existing environment because teams lack contextual understanding and visibility of their architecture.

What does the current architecture look like? What pods are tied to a specific namespace? What is the health of the nodes? Asking basic questions of our environment requires context switching between the AWS console, the kubectl command line, Terraform configuration files and monitoring tools.
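Each of those questions does have a kubectl answer, if you know where to look, which is precisely the context-switching burden described above. For instance (the namespace name below is a placeholder):

    # What pods are tied to a specific namespace?
    kubectl get pods -n my-namespace

    # What is the health of the nodes?
    kubectl get nodes
    kubectl top nodes    # requires the metrics-server add-on

    # What does the current architecture roughly look like?
    kubectl get deployments,services,ingresses --all-namespaces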

What if we could ask ChatGPT these questions?

Let’s look at an example of using PromptOps powered by ChatGPT to understand all the deployments within a cluster. PromptOps offers a free Kubernetes consulting tool that lets users ask questions and get immediate assistance in the form of BASH scripts, documentation references and other helpful resources.

By providing PromptOps with fragmented data about our infrastructure from different sources, such as Confluence, Notion, Terraform config files and more, we expect it to quickly aggregate all the information and help visualize the architecture. Instead of manually running kubectl commands to check the deployments, we prompted PromptOps over chat to describe which deployments are present in our cluster.

Here is PromptOps’ response:

PromptOps > I suggest running this script to answer your question.

PromptOps gave a script to run along with a reference to a Notion page which contains information about the cluster.
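The script itself appeared as a screenshot in the original post. A plausible stand-in (our own reconstruction for illustration, not PromptOps’ verbatim output) would be a short Bash script along these lines:

    #!/bin/bash
    # List every deployment in the cluster with its namespace and replica counts
    kubectl get deployments --all-namespaces \
      -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,READY:.status.readyReplicas,DESIRED:.spec.replicas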

Upon executing the script, PromptOps provided the output of the deployments within the cluster.

PromptOps also provided a visual graph of all the deployments in the cluster to make learning much easier.

The response from PromptOps simplifies Kubernetes infrastructure management, empowering everyone on the DevOps team to get up to speed on current architecture and keep up with changes.

Save Time with Automated Incident Investigation

Let’s look at troubleshooting production issues and see how PromptOps can help identify the root cause of failures in Kubernetes. Kubernetes metrics and logs offer critical insight across all levels from the cluster down to the container. They provide information on resource utilization, pod and node status, cluster autoscaler metrics, network errors, container runtimes and more.

But the issue with all these metrics is that they’re not immediately useful. A log stream or metric chart doesn’t provide the necessary context or direction. There are tools to help, but they often make things more complicated, requiring you to pull context from across the whole tech stack just to see what really happened, or who did what, when issues arise.
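As a small example of that gap, the raw signals are easy enough to pull, yet nothing in them says which change mattered. You can list recent warning events and read a crashed container’s logs (the pod and namespace names here are placeholders) and still be left correlating them by hand:

    # Recent warning events, sorted oldest to newest: symptoms, not causes
    kubectl get events --field-selector type=Warning --sort-by=.lastTimestamp

    # Logs from the previous, crashed container of a suspect pod
    kubectl logs my-pod -n my-namespace --previous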

At CtrlStack, we’ve been incorporating PromptOps to analyze causal graphs, Kubernetes events, metrics and logs to reduce the challenges of manual troubleshooting. The goal is to help DevOps teams quickly understand the Kubernetes issues by using PromptOps to generate root cause analysis summaries that explain the cause. In this example, a causal graph of the root cause analysis dashboard within CtrlStack is input to PromptOps for troubleshooting and resolution as described below.

Here is an example of a causal graph of the root cause analysis dashboard.

With this, we prompted PromptOps to explain the graph. Below is PromptOps’ response:

PromptOps > Here is the explanation of the graph:

  1. The graph is a visualization of the root cause of an error in a Kubernetes pod. The error is caused by changes in the pod’s image, which are similar to changes in the Kubernetes pod_error metric. The error is also similar to the inflection points found in the kubernetes_uptime and kubernetes_pod_mem_usage metrics, and to the topology changes in node1.
  2. The graph is showing the root cause of a kubernetes_pod_error change. There are three insights. The first is that the image has changed. The second is that there are inflection points found in the metrics kubernetes_uptime and kubernetes_pod_mem_usage. The third is that there is a topology change for node1.

The response from PromptOps provided helpful correlation of information across uptime and memory usage metrics as well as related topology changes. These insights involved checking the Kubernetes change events, metrics, resource usage, and topology changes. Based on this automated incident investigation, developers and operators should have sufficient context to quickly determine next steps to fix the problem.

Combining ChatGPT with Change Intelligence to Close the Skills Gap

Based on the examples provided, it’s apparent that ChatGPT can provide valuable assistance to close the Kubernetes skills gap. ChatGPT offers fast insights and clear explanations for DevOps teams to troubleshoot and fix production issues. This empowers junior operators as well as developers who are new to Kubernetes to learn the technology and resolve common issues independently.

While ChatGPT’s responses can provide a quick understanding of the problem, it requires contextual information specific to different issues of Kubernetes deployments. That’s where change intelligence comes in. Change intelligence platforms provide the causal graphs that connect resource capacity, infrastructure changes, configuration changes, metrics history charts and an event timeline to optimize the path to root cause analysis.

A ChatGPT-based approach to learning Kubernetes has the potential to significantly improve DevOps productivity while eliminating cognitive overload. By combining ChatGPT with change intelligence, teams can uplevel their Kubernetes skills and gain better observability.

The post Overcoming the Kubernetes Skills Gap with ChatGPT Assistance appeared first on The New Stack.

]]>
Cloud Native Skill Gaps are Killing Your Gains  https://thenewstack.io/cloud-native-skill-gaps-are-killing-your-gains/ Mon, 22 May 2023 14:15:29 +0000 https://thenewstack.io/?p=22708733

Of the many business challenges that organizations face today, I’d argue that skill gaps are one of the most underestimated.

The post Cloud Native Skill Gaps are Killing Your Gains  appeared first on The New Stack.

]]>

Of the many business challenges that organizations face today, I’d argue that skill gaps are one of the most underestimated. Many organizations don’t realize the impact, but the research is staggering: 90% of organizations will be affected by digital transformation IT skill shortages, which will cost more than $6.5 trillion globally by 2025, according to IDC.

When it comes to modern technology like cloud native computing, the impact of skill gaps becomes even more apparent. We all know that Kubernetes and containers enable us to move faster and ultimately make more money, but do we have enough people power to manage that?

According to the Cloud Native Computing Foundation’s 2022 Annual Survey, 44% of respondents that have yet to deploy containers in production said that lack of training is the most significant barrier inhibiting adoption. Even though containers are beginning to be considered the “new normal,” the hiring landscape simply doesn’t reflect the level of demand.

Why It’s Hard to Hire Cloud Native Experts

We’ve identified that there’s a cloud native skill gap, but for the people who actually are cloud native experts, competition for their talent is fierce. Here’s why:

  • Highly skilled engineers know their worth — Expect to pay up. Engineers who are experts in cloud native technology know that their skill sets are in high demand and have countless organizations vying for their talent. As a result of their expertise, these engineers can command industry-leading compensation packages and work agreements. The Googles of the world might not have a problem attracting these types of engineers, but not every company can offer lavish compensation packages featuring six-figure salaries, stock plans, catered meals and other desirable perks.
  • Difficult economic headwinds — We’ve all heard about the recent downturn in economic activity; companies are trying to tighten up while remaining competitive. Like I said, highly skilled engineers are expensive, and it can be difficult to keep pace from a hiring perspective in light of these headwinds. As a result, many organizations are looking to alternative sources of talent, such as recent graduates and remote workers, to fill the gaps in their engineering departments. However, these sources of talent might not have much real-world experience when it comes to cloud native expertise.
  • Rapidly changing technology — Modern technology moves FAST! Kubernetes and containers weren’t a thing until the last decade, so it’s hard to find folks who are truly experts at it. Recruiters often rely on certifications to assess capability, but a certification alone doesn’t necessarily indicate a person’s organizational experience. In addition, university programs provide only basic development skills, resulting in a lack of cloud native technology training for recent grads.

The Solution: Skill up Your People!

So, what’s the solution? Clearly, hiring people who are currently cloud native experts is difficult, ultra-competitive and expensive.

However, look inside your own organization and you’ll likely find people who are highly capable, intelligent and driven. Consider skilling up your existing employee base — your greatest asset — through robust training and certification programs. The benefits of providing a training and certification program to your employees are expansive:

  • Competitive advantage — Training your people so they’re well-equipped with the skills necessary to keep your cloud native technology running will increase your agility and scalability and even reduce costs. You’ll develop an immediate competitive advantage over organizations that haven’t made the same investment in their people.
  • Consistency and security — By training your people organically through a modern training and certification program, you’ll experience improved consistency across your teams. This consistency can help accelerate your organization’s speed in deploying, adopting and maintaining cloud native technology. In addition, this consistency can help bolster your security posture, as everyone will be following the same frameworks and standards.
  • Career advancement — Training and certifying your people leads to career advancement within your organization. By offering opportunities to grow and develop internally, you can encourage mastery in cloud native technology within your employee base. In fact, in our experience with customers, we’ve even witnessed organizations skill up folks who were originally in nontechnical roles and who are now superstars in cloud native technology!
  • Talent retention — Talent retention and career advancement go hand in hand. When your people know that they can advance in their careers and therefore increase their earnings within your organization, they’re compelled to stay longer. In fact, according to the American Upskilling Study, 61% of respondents said that upskilling opportunities are an important reason to stay at their job.

Through engagements between our consulting division, VMware Tanzu Labs (formerly Pivotal Labs) and organizations of all types across industries undergoing technological and cultural transformation, we see firsthand how critical learning and development are to their success. We believe cloud native skill gaps are a challenge worth conquering.

Consider skilling up your workforce in order to face this world of modern development, and you’ll be amazed at the outcomes your organization can accomplish. To learn more about the impact of skilling up your people for cloud native technology, check out the webinar “Unlocking the Full Potential of Cloud Native Technology with Education & Training.”

To start supporting your platform and application business goals today, check out freely available training programs:

Spring Academy — A comprehensive, project-based learning platform providing development projects that get you using Spring in real-world development scenarios and can help you prepare for the official Spring Certified Professional exam.

Kube Academy — A resource with dozens of courses led by subject-matter experts that offers hands-on practice with Kubernetes.

Tanzu Academy — An on-demand learning hub for platform and app operators to become experts at achieving desired outcomes with VMware Tanzu products, featuring highly curated material, such as tutorials, guides, hands-on labs infrastructure, and even a certification course that’s recognized across the industry.

Whether it’s enhancing platform and app engineering processes, optimizing workflows, or ultimately making your organization move more quickly, courses such as those offered in these active learning hubs can help equip your people with the skills they need to make your cloud native strategy successful.

The post Cloud Native Skill Gaps are Killing Your Gains  appeared first on The New Stack.

]]>
Economists Show AI Bringing Positive Impact to Workplaces https://thenewstack.io/economists-show-ai-bringing-positive-impact-to-workplaces/ Sun, 21 May 2023 13:00:46 +0000 https://thenewstack.io/?p=22708107

Stanford-based economist Erik Brynjolfsson is surprisingly cheery about AI’s ultimate impact. “I wouldn’t be surprised 50 years from now, people

The post Economists Show AI Bringing Positive Impact to Workplaces appeared first on The New Stack.

]]>

Stanford-based economist Erik Brynjolfsson is surprisingly cheery about AI’s ultimate impact. “I wouldn’t be surprised 50 years from now, people looked back and say, wow, that was a really seminal set of inventions that happened in the early 2020s…” he told CBS in January. “I think we’re going to have potentially the best decade of the flourishing of creativity that we’ve ever had, because a whole bunch of people, lots more people than before, are going to be able to contribute to our collective art and science.”

But is there evidence to support this optimism? Last month Brynjolfsson teamed up with MIT-based economists Danielle Li and Lindsey R. Raymond for a new paper to explore the question. Titled “Generative AI at Work,” it begins by noting the lack of other studies on the real-world economic effects of AI, calling their work “to our knowledge, the first study of the impact of generative AI when deployed at scale in the workplace” — and over a longer period of time.

So what did they find? The three economists conclude AI “increases worker productivity, improves customer sentiment, and is associated with reductions in employee turnover.”

And the specifics were even more intriguing…

Cyborg Customer Service

Their study focused on chat-based customer service. Among businesses using AI, 22% were using it in their customer service centers, according to a McKinsey Analytics study cited by the researchers.

Though the paper doesn’t name the AI tool, it describes its functionality: it provides support agents with real-time suggestions for responses, and also prompts them with links to internal technical documentation. And of course, it tracks entire conversations for context.

Importantly, the system doesn’t make suggestions at all if it doesn’t have enough training data — which “occurs in a large minority of cases” — and human agents always have the choice to disregard all suggestions.

A screenshot from 2023 NBER 'Generative AI at Work' paper

The conversations happened between November of 2020 and February of 2021, using a tool built on large-language models from OpenAI’s GPT family, “with additional ML algorithms specifically fine-tuned to focus on customer service interactions.”

Interestingly, its training data included conversations with both “successful” and “unsuccessful” outcomes (as well as conversations of varying length) — and indicates whether the data came from one of the firm’s top-ranked agents.

“The AI firm then uses these data to look for conversational patterns that are most predictive of call resolution and handle time,” the paper notes, adding it prioritizes “responses that express empathy, surface appropriate technical documentation, and limit unprofessional language.”

The randomly-assigned conversations “are relatively lengthy, averaging 40 minutes,” according to the report, “with much of the conversation spent trying to diagnose the underlying technical problem…”

“Equipped with this new AI system, the company’s customer support representatives are now basically part human, part intelligent machine,” quipped NPR’s Planet Money blog. “Cyborg customer reps, if you will.”

And after 3,007,501 conversations (performed by 5,179 agents) the results came in. Where before the support agents averaged 43 minutes for each call, that average dropped to just 35 minutes when AI suggestions were made available (over 1,180,446 calls).

But that’s just the beginning…

  • The AI suggestions also increased the average number of resolved issues by 13.8% (per hour) — fueled partly by an increase in the number of simultaneous chats that an agent can handle. The researchers write that their evidence suggests that “AI enables agents to both speed up chats and to multitask more effectively.”
  • The introduction of AI tools apparently also improved the retention of employees — in every skill group. And in a possibly-related development, the tools also improved the way customers treated their support agents, reflected, among other things, in fewer requests for a manager to intervene. The researchers see changes in the very experience of work, concluding that “generative AI working alongside humans can have a significant positive impact.” As it stands, the annual turnover rates for customer service agents can reach 60%, according to one study they cite — leading to costly training for replacement workers and wider variations in experience levels and productivity.
  • AI’s impact varied depending on a worker’s experience and skill level, “with the greatest impact on novice and low-skilled workers, and minimal impact on experienced and highly skilled workers.” In fact, for the highest-skilled workers, the AI help didn’t lower the average time spent on calls at all (though these agents may have been able to handle more calls simultaneously). But even more interesting, the highest-skilled workers saw “small but statistically significant decreases in resolution rates and customer satisfaction,” the researchers note, positing that AI assistance “may distract the highest-skilled workers, who are already doing their jobs effectively.” (Meanwhile, it’s the less-skilled agents who “consistently see the largest gains.”)

At the same time, there’s evidence that AI “disseminates” the “potentially tacit knowledge of more able workers.” That is, the benefits of experience get passed along to newer workers. The researchers provide specific benchmarks. Among agents given access to the AI, those agents with two months of tenure “perform just as well as untreated agents with over six months of tenure.” And the workers with AI assistance also seemed to get better faster.

Disseminating Productivity

Their paper cites a concept known as Polanyi’s paradox — that much of what we know how to do is hard to articulate into rules — and is considered a potential roadblock for full automation.

But AI appears to have the ability to acquire even those unexplained skills. Their paper argues the systems are “capturing and disseminating the patterns of behavior that characterize the most productive agents,” and the researchers saw gains in every measure of productivity — including speed, success rate, and customer satisfaction.

Some productivity measures - screenshot from NBER 'Generative AI at Work' paper (2023)

“[W]hat this system did was it took people with just two months of experience and had them performing at the level of people with six months of experience,” Brynjolfsson told Planet Money. “So it got them up the learning curve a lot faster — and that led to very positive benefits for the company.”

The report goes so far as to ask whether top-performing workers should be paid more — since their abilities are now propagated throughout the workforce.

But more importantly, the researchers note specifically that this is “in contrast to studies of prior waves of computerization.” Past automation saw a rarefied handful of engineers carefully mapping tasks onto algorithms — versus this cruder brute-force method of feeding masses of training data into an ML system until an algorithm emerges that everyone can follow.

Planet Money even asked Brynjolfsson if AI “could also reduce inequality by bringing the top and middle down, essentially de-skilling a whole range of occupations, making them easier for anyone to do and thus lowering their wage premium.” They report that Brynjolfsson “seemed a bit skeptical of this” — but they also see another important benefit. “It suggests that AI could benefit those who were left behind in the previous technological era.”

The paper clarifies at one point that their research wasn’t capturing the “wage effects” of AI. And it’s certainly true that AI may grow the economy, Brynjolfsson told Planet Money, using the classic metaphor of a bigger pie. But then he issued a warning. “It’s very clear that it’s not automatic that the bigger pie is evenly shared by everybody… We have to put in place policies, whether it’s in tax policy or the strategy of companies like this one, which make sure the gains are more widely shared.”

The paper’s conclusion concedes that “the effects we find may not generalize across all firms and production processes.”

Just for example, in fast-changing environments, it may be harder to train systems using historical data. But then it also acknowledges many other questions that it’s also leaving unexplored. Will customer service agents be reassigned to “more complex customer responsibilities, increasing aggregate demand”? Will customers prefer and even demand AI-enhanced support? Will AI systems uncover “patterns and insights” that change how workers are managed or how knowledge is shared?

And will wages go up or down?

There’s also the emerging issue of whether AI is passing off someone else’s skills as its own. “[O]ur findings raise questions about whether and how workers should be compensated for the data that they provide to AI systems,” the researchers write. “High-skill workers, in particular, play an important role in model development but see smaller direct benefits in terms of improving their own productivity.

“Given the early stage of generative AI, these and other questions deserve further scrutiny.”

The post Economists Show AI Bringing Positive Impact to Workplaces appeared first on The New Stack.

]]>
Tech Works: Why Burnout and Layoffs Hit Some People Harder https://thenewstack.io/tech-works-why-burnout-and-layoffs-hit-some-people-harder/ Fri, 19 May 2023 16:00:22 +0000 https://thenewstack.io/?p=22708458

Nearly three years ago, after the murder of George Floyd sparked worldwide protests against police violence and structural racism, more

The post Tech Works: Why Burnout and Layoffs Hit Some People Harder appeared first on The New Stack.

]]>

Nearly three years ago, after the murder of George Floyd sparked worldwide protests against police violence and structural racism, more than 200 tech companies pledged to confront racial inequity within their own organizations.

Now, many of those same companies are rolling back their diversity efforts — and that’s bad news for any member of a marginalized group who’s already working in tech.

“The so-called racial reckoning in the tech industry has taken a backseat to newer trends. Slow revenue growth, stock market falls, and rising interest rates have burst the tech bubble. And early pandemic over-hiring has resulted in mass job cuts in the sector,” Ebony Flake wrote for Essence in December.

Granted, the “racial reckoning” of the tech industry was mostly performative; a January report by the Business of Tech podcast continued to show that white men make up 89% of IT leadership.

But where does this leave people who are in the minority and/or marginalized in the tech industry, people who are women, people of color, immigrants, over 40, LGBTQ+, disabled or neurodivergent?

In short, it makes them more likely to burn out — and much more likely to leave the industry they fought harder to join.

So, as circumstances are unfortunately not likely to improve in 2023, how can those most minoritized by the tech industry fight what I’m calling “intersectional burnout”? How do layoffs affect marginalized people differently? Read on to learn how to help yourself and your colleagues in tech.

What Is ‘Intersectional Burnout’?

“Intersectional” means the ways in which social categorizations — including race, gender, class, age, disability, neurodivergence, and sexual identities — interconnect and influence each other. Intersectional burnout, then, is the particular vulnerability to exhaustion, detachment and imposter syndrome experienced by those who are not in the majority in an organization.

This experience has people “trying to outrun the stress response. So they’re working harder, but that’s not enough,” Monica Rose, a burnout prevention coach specializing in women in the workplace, told The New Stack. “You’re burning out as you try to outrun your emotions.”

The long-standing effects of intersectional burnout on those marginalized in any organization, Rose said, comes down to psychological safety and company culture. Really, she said, it centers on answering one question: “Am I safe to turn up to work as me?”

When the answer is “no,” it often comes down to everyday microaggressions made by the majority:

  • “Where are you from?”
  • “That’s interesting.” (When looking at people’s lunches).
  • Comments on hairstyles
  • Talking to a colleague about “your people.”
  • “Don’t get too emotional.”
  • Misgendering
  • Mispronouncing and misspelling names. (Nobody messes up “Arnold Schwarzenegger” or “Timothée Chalamet.”)
  • “Who’s the decision maker?” “You can’t be an engineer!” “You’re in marketing or DevRel, right?” “Can I talk to someone more technical/senior?” (All are commonly heard by women working booths at tech events.)

This overall “othering” forces teammates to code-switch or mask to fit within colleagues’ “norms.” This isn’t often out of malicious intent, Rose said, but rather a malicious habit built on unconscious bias.

Add to this the burden of playing the “official representative” of your particular underrepresented demographics. Over the years, dozens of women and especially women of color I’ve interviewed have referenced this isolating experience of being the only person in the room or often in the whole engineering department.

While tech’s majority is dealing with burnout by quiet quitting, those in marginalized groups tend to actually work harder. And in the current time of short staffing, Rose said, that translates to working past capacity and going on call more, in order to “keep up and go beyond,” lest you be assumed lazy.

Basically, tech is moving fast and breaking people.

“High levels of burnout across intersectional identities within the organization is a sure indication of the prevalence of a toxic workplace environment,” Rose continued, which “is defined by behaviors such as bullying, harassment and discrimination that goes unaddressed or is absorbed into the company culture as ‘the norm’.”

Recognizing and Overcoming Intersectional Burnout

Just like the stages of grief, addressing intersectional burnout begins with recognition. Rose draws on the burnout research of Christina Maslach in order to flag some signs of workplace burnout, including:

  • Withdrawal, not engaging in meetings.
  • Chronic stress.
  • Overworking, presenteeism and not setting boundaries.
  • Needing to take more time off, and not for fun.
  • Negative self-appraisal.
  • Feeling unsafe at work or forced to change your behavior.
  • Cynicism, including judging other people or the organization and complaining about the level, quality or conditions of work.
  • Physical reactions, including fatigue and headaches.

“Most often people don’t realize that it’s happening,” Rose said. “If you come over with a feeling like ‘These dudes don’t get me. I don’t belong,’ you start to buy into that narrative and start to look for more of those signs. You’re almost looking for reasons to be stressed, and you will find them.”

In reaction to burnout, she said, you often stop looking for allies and support networks to help. “Don’t discount that what you’re feeling might be true,” Rose said. “But, if that is true, what are you going to do about it for your own sake?”

This is especially of concern during times of layoffs. Whether your role is steady or not, she said, you have to make sure, if considering a move, that you won’t just jump into the same organization with a different name: “You know the questions you should be asking when you interview for a new job.”

Take time off to rest, she advised, and then reflect on whether you should stay or go. Specifically, Rose recommended physical activity so that you can break the stress cycle that has your mind and body in fight-or-flight mode.

If this sounds very similar to autistic burnout and other challenges facing neurodivergent people, that makes sense, as both forms of burnout are exacerbated by the strain of masking or code-switching and feeling gaslit when efforts to ameliorate circumstances are ignored.

Are Tech’s DEI Efforts Over?

Not to be a pessimist, but this level of isolation and burnout in tech is likely to get worse, especially for those who are marginalized by the industry.

Indeed, women are already among the hardest hit, accounting for 47% of tech layoffs while making up less than a quarter of the industry. This is in part because women disproportionately hold the customer-facing and employee-facing roles — like marketing, sales and human resources — that are first on the chopping block.

And this is after the tech careers of women and especially women of color were far more negatively impacted by the pandemic than those of men.

While it’s always easier to track demographics on the binary, there’s no reason to believe this trend won’t extend to all under-represented groups in tech. Those most at risk for intersectional burnout will likely feel the most pressure over the upcoming months.

Making things worse, diversity, equity and inclusion (DEI) advocacy roles, which champion diverse hiring and retention practices, have suffered a disproportionate impact from the tech layoffs, according to research released in February by Revelio Labs.

“Companies were ‘into’ diversity as a trend,” all the way through 2021, Veni Kunche, founder of Diversify Tech, a job board and talent directory, told The New Stack.

But at the start of 2022, about six months before most signs of economic downturn emerged, she noticed a drop in demand for the services of her company, which focuses on talent underrepresented in the industry.

“Diversity is not a trend anymore,” Kunche said. “I expected it to go low, but I didn’t expect it to be a full 180 so fast.”

In my own network, I’ve noticed cuts in DEI and accessibility roles; such cuts have been boldly broadcast by companies like Basecamp, Coinbase and Twitter, with seemingly little concern about looking bad about it. “White men are really publicly declaring, ‘we’re not going to talk about politics’,” Kunche said.

Over the last six months, Amazon and Nike — with more than 800,000 and 80,000 employees globally, respectively — have also dramatically shaved down their DEI teams to the single digits, without the same fanfare. Kunche has also seen more companies quietly taking down their diversity pledges from their websites.

“We were making some progress. And then, all of a sudden, it felt like these high-profile CEOs were saying it doesn’t matter. And while some don’t publicly say it, they are absorbing it,” she said, reflecting not only on what she’s witnessed publicly but in private communities, like Slack groups for startup founders.

Now, she added, “People are saying, ‘We focused too much on this’ even though they didn’t really. It’s back to square one: ‘We are just going to hire who we get along with.’  The focus is not on DEI anymore, they’re not even trying to be cautious of it.”

This will have a harrowing impact on diversity in tech: the Revelio report found that, since 2020, companies with DEI teams have seen a measurable increase in non-white applicants, and that the presence of DEI teams correlates directly with boosted company morale — and in a time of layoffs, anything that boosts worker morale should be a priority.

The Revelio report also noted that DEI teams typically have a higher representation of Black, Hispanic and Asian employees than companies as a whole, which means further slicing off of marginalized employees.

“As usual, we will be forgotten,” Kunche said. But she’s not giving up so easily. She recently added a no-charge, public list of underrepresented talent who have been laid off.

How Layoffs Hit Differently

The trauma that causes intersectional burnout also harms your ability to respond to change, meaning tech layoffs are not hitting everyone the same way.

“The one big thing is that layoffs happen, but to be rehired, that is what hits marginalized folks the most. We are not let into those networks to have referrals,” Kunche said.

Since marginalized people in tech have had to work harder to get where they are, they are more likely to remain at companies longer, she said, rather than having to deal with new aggressions at a new workplace. This in turn means, “you didn’t build that network or weren’t even allowed to.”

Whether you’re laid off or not, she recommended starting networking now. LinkedIn is not optional, so keep that profile updated, connecting with recruiters and hiring managers — not just following them — so you show up higher in their results.

From there, Kunche suggests looking for safe spaces developed for these marginalized groups.

Introduce yourself and keep an eye on the job channels in these communities.

Strategies for Surviving a Layoff

If you are laid off, Kunche echoed Rose’s advice: take a break. Also, reevaluate if your career is really headed where you want it to go, if you are happy, and if you need to take a different path.

She also warned against jumping at the first opportunity. Instead, be intentional. Google any potential employer, look for signs of layoffs, signals for funding, and any pending court cases. LinkedIn is also a great tool, she says, for getting a general idea of who works there — are all engineers white men?

Check out Glassdoor, as well as InHerSight, which features company reviews and career advice especially by and for women. The previous Slack communities are also great to ask for more honest opinions of a potential employer.

Kunche has gotten valuable intel that way. She told of one sponsor she rejected for Diversify Tech because she learned that there was a whole alumni Slack community to support women and queer people who were mistreated at that company. A few months later, news came out that the company was getting sued for persistent harassment.

“The worst thing a lot of people are doing is they aren’t mentioning that they are laid off or they are looking for an opportunity,” she said, because you can’t offer help to people if you don’t know they need it.

“People think if they are laid off it will give the impression they aren’t good at a job,” but she added that we are way past that blame game as an industry. Once you let people know your situation, she said, “your network can keep you in mind.”

Check out this episode of The New Stack Makers for more on recognizing, recovering from and preventing burnout:


Are you in a minoritized group in the tech world, but work for a great organization? Share your positive experience with @TheNewStack and @JKRiggins so folks know where they should be applying.

The post Tech Works: Why Burnout and Layoffs Hit Some People Harder appeared first on The New Stack.

]]>
A11y: GitHub Brings Accessibility to 85% of Open Source https://thenewstack.io/a11y-github-brings-accessibility-to-85-of-open-source/ Thu, 18 May 2023 14:25:16 +0000 https://thenewstack.io/?p=22708396

With 1.3 billion disabled people in the world, they are not edge cases. Yet, while there’s an effort and even

The post A11y: GitHub Brings Accessibility to 85% of Open Source appeared first on The New Stack.

]]>

With 1.3 billion disabled people in the world, they are not edge cases. Yet, while there’s an effort and even requirements to embrace accessibility in the end-user experience, it remains incredibly rare to see accessibility guidelines for the developer experience. And, without accessibility — or a11y in shorthand — built into the software development lifecycle, 16% of the world’s population are kept from participating in the creation of our future.

GitHub — home to 85% of open source communities — is in a unique position to change that. Which is just what it is doing in recognition of today, Global Accessibility Awareness Day (GAAD).

The New Stack interviewed Head of Accessibility Ed Summers about GitHub’s updates toward an accessible developer experience at a global scale, and what the tech industry and the world must do next to make sure everyone is able to build technology.

‘Nothing about Us, without Us.’

GitHub’s accessibility efforts fall into two buckets:

  • Increase cultural acceptance of disabilities — which they are doing by highlighting the voices of disabled developers in the ReadME Project
  • Make developer experience inclusive and available — lowering socio-technical barriers to entry

Much of our discussion focused on the latter.

“Technology is both ubiquitous and absolutely required, and access of it is required. So how do we make technology more accessible for 1.3 billion people out of the gate, and also create better, innovative access technologies?” asked Summers, laying out the foundation of GitHub’s approach to accessibility.

It’s also what attracted him to join the company nine months ago. In fact, he wasn’t looking for a new gig, but when he was headhunted, he was so intrigued by GitHub’s job ad that he went on the interview just to meet its author. Really, it came down to a line at the end of the description: “People with lived experience are encouraged to apply.” As a blind software engineer and accessibility specialist, he understands the specific struggles associated with keyboard-only navigation, screenreaders and more.

This verbiage in a job ad showed that GitHub understood that the success of this endeavor is defined by making sure “that people with disabilities are participating in — or leading — those efforts to build the technologies that are used by all humanity. And if the relatively small group of people who build the technologies that we all use are not representative of all populations and excluding people with disabilities, then people will be excluded,” Summers said.

So the overarching goal is to empower people with disabilities to create, collaborate and contribute to that future. In other words, “Nothing about us, without us.”

Updates to GitHub’s Cross-Platform Accessibility

One of the most exciting outcomes of the next stage of advancement in mobile, the Internet of Things and AI is increased inclusion via better accessibility. Because technology is essential for people with disabilities to participate in the world. And technology is becoming a human right that no one should be excluded from.

With this in mind, Summers says, GitHub has reached an inflection point, creating a “unified, strategic, organized effort,” with significant improvements recently released across the platform. “It takes time to build the systems and other elements of culture in order to make that progress,” he said, but continuous momentum — with the right systems and culture in place — is essential to tackling the challenge.

They are continuously testing the accessibility of the whole website, as there’s always room for improvement.

GitHub has recently shipped a range of cross-platform improvements to its accessibility.

All of these changes have been implemented across hundreds of pages via Primer, GitHub’s open source design system, which includes large components of reusable code. They’ve also put automation in place to prevent regressions in the future and incorporated it into the CI/CD process, “shipping that left,” Summers explained.
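Summers doesn’t spell out the exact tooling behind that automation in this piece, but a minimal sketch of such a CI gate, assuming the open source @axe-core/cli as a stand-in for whatever GitHub actually runs, might look like this:

    # Install the open source axe CLI (a stand-in; GitHub's actual tooling isn't specified)
    npm install --global @axe-core/cli

    # Audit a locally served preview build and fail the job if violations are found
    axe http://localhost:4000 --exit

Wired into a pull-request pipeline, a check like this catches accessibility regressions before they merge rather than after they ship.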

Speaking animatedly of the new code search tool, he said, “It’s just really improved my productivity, [Searching code] is just something I do all the time. I need to go find something and it’s just so easy to go to use that search, of course, with a screen reader.”

In fact, Summers listens to screenreaders at a rate of about 600 words per minute — more than double the pace of the average sighted reader — so anything that speeds that up dramatically improves his productivity, potentially even giving him an edge over sighted coders. When accessibility support is there, he says, it’s not unusual for disabled people to surpass abled people.

“We are forced to adapt to our assistive technologies and kind of push it to the limit,” Summers said. He then shared the incredible example of Paul Chiou, a developer who is paralyzed from the neck down, and who, in pursuing his doctorate, is designing and building custom hardware and software, as well as revolutionary accessibility automation tooling.

“Paul was having some real problems with accessibility within a game and he started tinkering and experimenting, and he got to the point where he had like 12 different commands that he could use within the game, which was more than a fully-abled developer would have on their keyboard,” Summers said. “It’s a great example where disability was a catalyst for creativity, and that creativity led to capabilities that are beyond the norm.”

It Is a Pipeline Issue

While computer science is a degree that attracts more disabled people on average, overall, people with disabilities have a lower rate of graduation from high school and a far lower rate of graduation from university and beyond. “Access to education as a whole can be a real struggle,” Summers said, and the tech pipeline still relies heavily on the often artificial barrier of university degrees.

He offered some “bright spots” or resources that are helping make tech education more accessible:

  • Access Computing – out of the University of Washington, along with the DO-IT project.
  • Make4All – has projects including making physical computing more accessible.
  • CS4ALL – for primary and secondary school students.
  • Code.org – for the same age group.
  • Paths to Technology – out of the Perkins School for the Blind, this helps teachers to adapt to ever-changing accessibility technology and adapt technological advances for disabled students.

Then, of course, once they get into university, Summers said there are other hurdles, including the rest of STEM study, like math and physics, that are required and often not technologically inclusive at all.

On top of all this, tech workers need to pursue life-long learning to stay in the game. That is mostly done via online materials.

FreeCodeCamp.org has invested in accessibility, and targets the parts of the world without high-speed internet access. Wikipedia is another open source project that has achieved global accessibility of education through low bandwidth and, while reliant on a partially voluntary open source workforce, has very detailed accessibility standards.

Understanding that education is another hurdle for this community, the GitHub Global Campus, already written in more accessible HTML, has recently shipped accessibility fixes that further improve the way its HTML is written, including:

  • Reviewing headings and content hierarchy for skim-ability.
  • Making sure linkable things are links and clickable things are buttons.
  • Including more semantic tags for easier interaction.

What Companies Can Do on GAAD and Every Day

“It’s a great time to hire people with accessibility expertise,” Summers promises, even amid, or perhaps because of, continued tech layoffs. Especially “for companies who are interested in making the world a better place and increasing representation.”

By no means does he think GitHub’s work is done. “We’re shipping to learn,” he said, because accessibility is hard to get perfect, but “it’s relatively easy to tell if there’s a positive intent and effort being made,” like actively encouraging people with disabilities to apply to join your teams.

And while no one has it perfect, Summers encourages other organizations to also share their progress. Because even GitHub isn’t doing it perfectly. “We ship accessibility improvements that we know are flawed, that may be flawed, or missing, or not completely what we want them to be, but we can get them out the door and get them into the hands of developers now,” he said, because disabled developers need these features now. And what better way to learn than to get developers using them?

“I’ve never been more optimistic about the possibilities for people with disabilities — it just keeps getting better and better,” Summers said. “And I think technology is a large part of that, but also, it’s just the widespread acceptance of people with disabilities.”

And at the scale of GitHub, the impact for an accessible developer experience can be tenfold. Or, as Summers put it:

“We’re creating a platform that anybody can build on. All are welcome. Let’s see what we can do.”

How is your tech company concentrating on increasing accessibility in your developer experience? On recruiting disabled developers? We want to hear! @TheNewStack and @JKRiggins.

The post A11y: GitHub Brings Accessibility to 85% of Open Source appeared first on The New Stack.

]]>
Avoiding Uncertainty in Uncertain Times https://thenewstack.io/avoiding-uncertainty-in-uncertain-times/ Mon, 15 May 2023 17:17:37 +0000 https://thenewstack.io/?p=22707612

Tech layoffs. Just seeing those words can make most of us in the industry suffer a painful twinge. And so

The post Avoiding Uncertainty in Uncertain Times appeared first on The New Stack.

]]>

Tech layoffs. Just seeing those words can make most of us in the industry suffer a painful twinge. And so far 2023 has been “twinge-full,” to say the least. According to TechCrunch, tens of thousands of tech workers have already been laid off this year.

The world is always full of uncertainty, but that seems to be especially true today for those whose livelihood is tied to technology. For those of us who manage teams, this “shifting sand” environment makes our role even more important and all the more challenging.

Uncertainty can have a detrimental effect on the people we supervise. They may experience stress and anxiety and feel less motivated, especially when the decisions that affect them grow more consequential or the uncertainty lingers unresolved indefinitely. We often see the “hurry up and wait” pattern repeat as a company tries to adjust to its changing circumstances.

As a manager, I can attest that uncertainty can negatively affect team dynamics and performance. When uncertainty spikes, chatter and rumors fly between team members who are trying to get a better grasp on the situation. Motivation wanes when objectives are not clear and the company goals seem like a moving target. As anxiety rises, performance falls. A few people may be able to capitalize on uncertain times to grow and seize opportunities that might not be available to them otherwise, which can be a positive thing, but those changing dynamics in and of themselves may further increase the chatter and unease among the more risk-averse members of the team.

Transparency Assuages Fear of the Unknown

The role of a manager is to lead the way through the turbulent times, offering as much transparency as possible and painting a common vision on which the team can focus. As a leader, you are responsible for your teams’ well-being and that includes letting them know where they and the company stand. When most conversations occur out in the open and when team members feel free to raise concerns, there is less room for “under the surface” chatter, confidence in leadership is reinforced, good ideas can be elevated and the team’s efforts can remain better aligned with the most pressing business objectives, all of which help propel the company forward.

It’s never easy to deliver bad news, but difficult conversations are made easier if you have been transparent and empathetic all along. Receiving bad news as a surprise is far worse. Being aware of matters as they unfold and feeling confident in leadership’s best intentions make all the difference in the world when the bad news finally comes.

That being said, it’s only fair to note that transparency doesn’t come without risk. Some people will be tempted to “cut bait and run” when the going gets tough. A manager has to expect that and react appropriately: Handle exits with understanding and send off departing teammates in the best way to their next adventure, all the while openly communicating to the rest of the team. You might lose some good people along the way, but the remaining team ultimately will be closer knit and stronger for it. Plus, if you do it right, those who leave will serve as your company’s ambassadors and will help drive business and new hires your way.

When I was a young manager at Delver, a social search company, the company was running out of money after failing to secure its next round of funding in the wake of the 2008 financial crisis. The CEO told employees the situation with tears in his eyes, and in addition to bringing in human resources firms to help employees secure their next gig, he did everything in his power to make sure all employees were taken care of. A month and a half later, the story had a happy ending: The company was acquired by Sears, and almost all employees, myself included, continued on after the acquisition, primarily due to the trust we had in the CEO because of his sincere efforts and his authenticity.

If you’re a team leader striving to help your teams succeed as the headlines barrage us with bad news, here are six things you can do to minimize the negative impacts of uncertainty:

1. Be Positive

When there is turbulence in an aircraft, people look to the crew to see how they are reacting, and the same goes for leaders. In times of uncertainty, your team will be watching you closely. Be transparent and honest, but also project positivity and confidence in any way you can. Your best hope of surviving a crisis is when everyone steels their resolve and works together. As a team leader, your behavior sets the tone for the team.

2. Focus on What You Can Control

Let’s face it: When markets crash and economies collapse, there is not much you can do to change the situation, and dwelling on it constantly will do no good. Instead, focus your thoughts and efforts on your product and delivery, and encourage your team to do the same. Remember, your reputation is your biggest asset. If the story doesn’t end well (which it sometimes doesn’t), having your co-workers remember you as a person who worked hard until the end will help you land future opportunities. Moreover, focusing on the here and now is a great way to keep yourself happy. Once it’s all over, you don’t want to live with the regret of wishing you had done things differently. Doing your best with the things you can control is the best prophylactic for regret.

3. Be Flexible and Agile 

In uncertain times, the ability to react quickly as things change can make all the difference. Your responsibility as a leader is to build a strong, robust team that can work well in short iterations. Put a process in place that allows the team to quickly change direction and deliver new value as often as needed. You need to reexamine your priorities, review the changes in the industry and reassess — and mitigate — the risks more frequently than ever before. Every step of the way, make transparency paramount and communicate continuously with your team.

4. Focus on Product

In a strange paradox, times of crisis might provide development teams with a rare opportunity to focus on the product and make it better. Iterate and improve the value you provide to your customers. This will pay off in the long term.

5. Forge a Strong Team

In times of uncertainty, you’ll never go wrong by investing in your team. Many of the startups that I have worked for pivoted — some even pivoted multiple times — and most of them ended up successful. I am convinced that the specific problem these companies set out to solve or the market they chose to compete in wasn’t nearly as critical a factor in their ultimate success as the team they built.

If you’re lucky enough to be able to hire during times of uncertainty, focus on hiring strong, versatile people who will give the team as much flexibility as possible to navigate change. If you are reducing your team’s size, make sure you keep those who are strong and versatile, even if they are working on lower-priority projects. Not only will this boost morale by making such decisions seem less arbitrary, but it will also ensure you are well positioned after the storm has passed.

6. Be Resilient

If you already have a “no blame” culture in which people feel they can take big risks, even when those risks don’t pan out, congratulations! You are well on your way to having a resilient team. Your team is already conditioned to know that not everything you try will work — not at the project level, and not even at the company level. Practice getting up when you fall down: Continue to emphasize the lessons learned from failures, at both the individual and team levels. Throw-away work is less painful if you make a point of recognizing the learning, skills and process improvements it produced. Drawing upon and reinforcing this resilient, no-blame culture becomes even more important during times of uncertainty.

Uncertain times, like those we are facing in the tech industry in 2023, are an opportunity for leaders and team managers to reveal the best of themselves. Above all, our teams are looking to us to model the attitudes and behaviors that refine ore into steel. As George Washington, a leader who faced seemingly insurmountable challenges, once said, “Perseverance and spirit have done wonders in all ages.”

The post Avoiding Uncertainty in Uncertain Times appeared first on The New Stack.

]]>
Developers Put AI Bots to the Test of Writing Code https://thenewstack.io/developers-put-ai-bots-to-the-test-of-writing-code/ Sun, 14 May 2023 13:00:43 +0000 https://thenewstack.io/?p=22707465

One Bay Area technical support specialist told me he’d had a secret advantage when a potential employer assigned a take-home

The post Developers Put AI Bots to the Test of Writing Code appeared first on The New Stack.

]]>

One Bay Area technical support specialist told me he’d had a secret advantage when a potential employer assigned a take-home programming problem. He’d used ChatGPT to generate a solution, then turned it in as his own work.

OpenAI reminds users that its ChatGPT is currently in a free “research preview” to “learn about its strengths and weaknesses.” And there are plenty of other options to explore as well.

The last month has also seen the launch of Hugging Face’s open source alternative, “HuggingChat” — and a set of dedicated coding tools like StarCoder Playground.

With so many AI-powered assistants waiting to be explored, we’ve now entered the phase where excited users try their own homegrown experiments — and share the results online.

Can these new AI-powered tools really generate code? With a few qualifications and caveats, the answer appears to be yes.

Informal Tests

It’s always been the provocative question lurking behind the arrival of powerful AI systems. In early 2022, Alphabet reported that its DeepMind AI research lab had created a computer programming system called “AlphaCode,” which was already ranking “within the top 54%” of the coders competing on the site Codeforces. By November, GitHub was experimenting with adding a voice interface to its impressive AI-powered pair programmer, Copilot.

But now the systems are facing some more informal tests.

Last month a game developer on the “Candlesan” YouTube channel shared ChatGPT’s efforts to recreate the popular mobile game Flappy Bird. While it took several iterations, the code was completed in about 90 minutes. It was written in C# for the Unity game engine — and even used art the developer generated with Midjourney.

The video hints at a possible future where developers use AI to get their work done faster.

“What I really like about this process is that while ChatGPT is taking care of the code, I get to focus my attention on design work,” explains the video’s enthusiastic game developer. “I get to position text elements on the screen, I decide the distance between the pipes, or the exact tuning numbers for how hard the bird flaps its wings.”

And in a later video, the same developer uses ChatGPT to code bots to play the game ChatGPT just built.

Acing the Coding Test

Can AI pass a professional coding test? Other experiments suggest the answer there is also “yes” — but not for every AI system. One such test appeared last month on the tech site HackerNoon, when Seattle-based full-stack developer Jorge Villegas tested GPT-4, Claude+, Bard, and GitHub Copilot on a practice exercise from the coding site Leetcode.com. Villegas distilled the question down to an unambiguous five-word prompt: “Solve Leetcode 214. Shortest Palindrome.”

Leetcode’s practice puzzle #214 challenges coders to turn a string into the shortest possible palindrome by adding letters only to the front of the string. “While I could have asked follow-up questions, I chose to only consider the initial response,” Villegas added.
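
For reference, a classic human-written approach finds the longest palindromic prefix with a Knuth-Morris-Pratt (KMP) failure table. Here is a minimal Python sketch of that technique, offered as an illustration of the puzzle itself, not as any of the AI answers Villegas graded:

    def shortest_palindrome(s: str) -> str:
        # Build the KMP failure table for s + "#" + reverse(s); the final
        # entry equals the length of the longest palindromic prefix of s.
        combined = s + "#" + s[::-1]
        fail = [0] * len(combined)
        for i in range(1, len(combined)):
            j = fail[i - 1]
            while j > 0 and combined[i] != combined[j]:
                j = fail[j - 1]
            if combined[i] == combined[j]:
                j += 1
            fail[i] = j
        # Everything beyond that prefix must be mirrored onto the front.
        return s[fail[-1]:][::-1] + s

    print(shortest_palindrome("abcd"))  # prints "dcbabcd"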

It’s a tricky puzzle — and the results were some hits and some misses…

  • GPT-4 wrote code that passed all of Leetcode’s tests — and even ran faster than 47% of submissions to the site by (presumably human) users. Villegas’s only caveat was that GPT-4 is slower to respond than the others — and that using its API “is also a lot more expensive and costs could ramp up quickly.”
  • Villegas also tested the Claude+ “AI assistant” from Anthropic, a company describing itself as “an AI safety and research company” that builds “reliable, interpretable, and steerable AI systems.” But unfortunately, the code it produced failed all but one of Leetcode’s 121 tests.
  • Google’s “experimental AI service” Bard failed all but two of Leetcode’s 121 tests. (Bard’s code also contained a bug so obvious that Villegas felt compelled to correct it himself: a method was missing Python’s self parameter, which methods need in order to reference the instance they’re called on; see the sketch after this list.)
  • Villegas tested GitHub Copilot (asking the question by typing it as a comment in Microsoft’s Copilot-powered VSCode). And it passed every one of Leetcode’s tests — scoring better than 30% of submissions (from presumably human coders).
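
For readers unfamiliar with the self slip mentioned above, here is a minimal Python illustration of that class of bug. It is a generic reconstruction, not Bard’s actual code:

    class Counter:
        def __init__(self):
            self.count = 0

        # Correct: `self` is the explicit first parameter, giving the
        # method access to the instance's attributes.
        def increment(self):
            self.count += 1

        # Broken (the kind of slip described above): defined without
        # `self`, this method raises a TypeError when called on an
        # instance, because Python still passes the instance as the
        # first argument.
        # def increment():
        #     count += 1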

Villegas’s essay closes with an important caveat. “It is unclear whether any of these models were pre-trained on Leetcode data.” So in early May Villegas tried another more specialized test, using a slightly longer prompt that requested four different CSS features written with a specific framework.

“Create a header component using Tailwind CSS that includes a logo on the left, navigation links in the center, and a search bar on the right. Make the header dark purple.”

The results from GPT-4 “overall looks very good” and Claude+ made “a pretty good attempt,” while for Bard’s response, “the nav links have no space between them, the search bar is illegible against the background… I guess it still got the main parts of the prompt correct, all the content is in the correct order.” And Bing’s version of GPT-4 was the only one that actually got the navigation links in the center.

Villegas’s ultimate verdict is that AI-generated code “often lacks attention to detail and can result in design flaws. Additionally, AI still struggles with context awareness, and it can be challenging to provide precise instructions that an AI can follow accurately.

“These difficulties demonstrate that AI cannot replace human designers entirely but can be a valuable tool to assist them in their work.”

Plugins and PHP

ZDNet attempted some even more ambitious tests.

Senior contributing editor David Gewirtz had used ChatGPT back in February to generate a working WordPress plugin for his wife. It randomized items on a list — though a series of additional feature requests eventually tripped it up, with ChatGPT failing to sanitize the input when calling PHP within HTML.

While Gewirtz decided this was only coding at the “good enough” level, he also noted that “good enough” is what many clients actually want. This led Gewirtz to conclude that AI will “almost undoubtedly” reduce the number of human programming gigs, adding that even today AI is “definitely an option for quick and easy projects… this surge in high-quality generative AI has been startling to me.”

In April he’d tried the same test using Google’s Bard, but it generated a plugin that didn’t work: It produced blank output rather than a list of names in random order. Bard also got tripped up when asked for a simple rewrite of an input checker so it would allow decimal values as well as integers (its rewrite would have allowed letters and symbols to the right of the decimal point). And when testing both Bard and ChatGPT on some buggy PHP code, only ChatGPT correctly identified the flaw. “For the record, I looked at all three of Bard’s drafts for this answer, and they were all wrong.”

But then Gewirtz decided to push ChatGPT to write a “hello world” program in 12 different programming languages. He used the 12 most popular programming languages as ranked by O’Reilly — Java, Python, Rust, Go, C++, JavaScript, C#, C, TypeScript, R, Kotlin, and Scala — and ChatGPT dutifully complied (even providing the appropriate syntax coloring for them all).

To make things more challenging, his prompt even requested different messages for the morning, evening, and afternoon. While Gewirtz didn’t run the code, “I did read through the generated code and — for most languages — the code looked good.” And a quick test of the JavaScript code shows it does indeed perform as expected.
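
As an illustration of what that prompt demands, here is a minimal Python version of such a time-aware greeting. It is a reconstruction of the prompt’s requirements, not ChatGPT’s actual output:

    from datetime import datetime

    def greet() -> str:
        # Choose a message based on the current hour of the day.
        hour = datetime.now().hour
        if hour < 12:
            return "Good morning, world!"
        if hour < 18:
            return "Good afternoon, world!"
        return "Good evening, world!"

    print(greet())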

Just for fun, Gewirtz also asked it to produce results using the legacy Forth programming language — and it did. So then in a later article, Gewirtz challenged ChatGPT to write code in 10 more “relatively obscure languages,” including Fortran, COBOL, Lisp, Algol, Simula, RPG (Report Program Generator), IBM’s BAL (Basic Assembly Language), and Xerox PARC’s Smalltalk.

In short, Gewirtz took ChatGPT through a history of programming languages dating as far back as the 1950s. And he described the results as “cool beyond belief.” Though he didn’t run the generated code, “most look right, and show the appropriate indicators telling us that the language presented is the language I asked for…”

ChatGPT even rose to Gewirtz’s challenge of writing code in another ancient language, APL, which sometimes uses a non-standard character set — though the font used to display the code rendered APL’s symbols as “little glyphs.”

But perhaps the most thought-provoking result of all came when ChatGPT generated code in the equally ancient Prolog. This is especially notable because of claims that ChatGPT itself relies, at least in part, on Prolog: Gewirtz notes a report that ChatGPT uses a mode that translates Prolog logical forms into sentences in natural language.

With so many examples of AI assistants already generating code, maybe it’s time to move on to the question of how they’ll ultimately be used. That is a question we’ll be watching in the months and years to come.

The post Developers Put AI Bots to the Test of Writing Code appeared first on The New Stack.

]]>
Despite Economic Uncertainty, Tech Hiring Remains Robust https://thenewstack.io/despite-economic-uncertainty-2023-tech-hiring-remains-robust-linux-foundation/ Thu, 11 May 2023 16:03:18 +0000 https://thenewstack.io/?p=22707807

We already know that the economy is making organizations rethink their IT headcounts. But a new study shows increased willingness

The post Despite Economic Uncertainty, Tech Hiring Remains Robust appeared first on The New Stack.

]]>

We already know that the economy is making organizations rethink their IT headcounts. But a new study shows increased willingness to use training on new technologies as a way to retain current employees.

Fifty-nine percent of respondents to the new survey by Linux Foundation Training and Certification and Linux Foundation Research said their organizations changed their 2023 technical hiring plans due to economic conditions last year.

Despite high-profile layoffs by large tech companies over the past several months, 57% of organizations are adding workers this year, while 20% are cutting jobs and 46% have enacted at least some hiring freezes. The total adds up to more than 100% (57% + 20% + 46% = 123%) because some companies are taking multiple approaches, hiring in some areas while cutting in others.

Meanwhile, 50% said they are offering their workers training in new technologies in 2023 as a retention strategy, up from 40% in 2022. Overall, 70% of organizations offer training on new technologies.

The willingness to develop in-house talent also helped ease difficulties in hiring: When they couldn’t fill a position, 58% of survey participants said they trained existing staff on new technologies, while 38% hired consultants.

Findings from the report were gleaned from 418 IT hiring managers. Respondents were employed in a variety of industries, with 45% of participants coming from the U.S. or Canada, 25% from Europe (excluding Russia), 19% from India and the remainder from other locations.

A Snapshot of the Tech Job Market

Although not all organizations reported increasing their technical headcount in 2023, those gains were spread across all types of industries. Employers were most likely to add jobs in the areas of cloud/container technologies, cybersecurity and AI/machine learning.

Among other findings:

  • Government, nonprofit/non-governmental organizations and academic institutions were least likely to report increasing their headcounts, with 45% saying they had added tech jobs.
  • Telecommunications and Internet service provider/web hosting companies were most likely to have added to their tech headcounts, with 71% saying they had done so.
  • The median percentage of tech staff laid off in 2022 was 20%. However, that figure was much higher — 35% — for technology vendors, suppliers and service providers.
  • Senior technical roles appeared most likely to be cut when organizations eliminate jobs: 45% of respondents whose companies are cutting said senior technical roles are on the chopping block.

The report offered some optimism for late 2023 and next year. Among organizations that plan to add technical roles this year, 46% said they could foresee even more increased hiring later in 2023 and 38% said the same for 2024.

However, many organizations that have already made cuts were more pessimistic, with 56% saying they could foresee more tech staff reductions later this year or in 2024.

Recruitment Challenges

The report shed light on the difficulties in hiring and keeping technologists. Nearly one in three new hires — 29% — leave within six months of being onboarded.

The average time to fill a technical position is 4.3 months, according to the report, which also found that it takes an average of 2.6 months to onboard a new technical employee to the point where their team achieves normal productivity.

The post Despite Economic Uncertainty, Tech Hiring Remains Robust appeared first on The New Stack.

]]>
Kubernetes Community: A Guide to Open Source Localization https://thenewstack.io/kubernetes-community-a-guide-to-open-source-localization/ Tue, 09 May 2023 12:00:17 +0000 https://thenewstack.io/?p=22706396

One of the most consistent barriers to contributing to open source is the dominance of the English language. These highly

The post Kubernetes Community: A Guide to Open Source Localization appeared first on The New Stack.

]]>

One of the most consistent barriers to contributing to open source is the dominance of the English language. These highly distributed, remote teams rely on asynchronous communication for code, documentation, Slack channels, mailing lists and more, making it hard to even get started if you aren’t confident in your English.

Since open source contribution relies on a lot of unpaid labor, many continue to volunteer because they feel part of a community. Those with greater English-language confidence have been shown to feel a greater sense of belonging and thus have higher intrinsic motivation.

This means those who don’t read, write or speak English proficiently are cut out of the opportunity that is open source — from technical and project management experience to networking to work requirements. And open source communities, in turn, can reach far fewer users and contributors. A direct result is that open source contributors reside predominantly in the Global North, meaning those who could potentially benefit the most from reliable, free software are kept out of it.

And, with tech being the future and open source making up about 70% of software stacks, the predominance of the English language means whole countries are being locked out of participating in building our shared future.

Thankfully, some communities are starting to understand this risk to open source sustainability and are making an effort to translate their documentation. But again this is a time-consuming, largely voluntary effort.

The Cloud Native Computing Foundation‘s largest project — Kubernetes — has successfully translated its core docs into 14 languages, with at least three more in the works. At the recent KubeCon+CloudNativeCon Europe, Divya Mohan and Natali Vlatko, two of the three co-chairs of the massive documentation special interest group or SIG Docs, outlined the process of dismantling this inclusion hurdle — while of course encouraging others to contribute to localization.

What Is Localization vs. Translation?

Mohan and Vlatko, along with Rey Lejano, are in charge of setting up the procedural, administrative and technical approvals required around the documentation, which includes the whole Kubernetes website, reference docs, the blog and localization.

“We talk about translation, but it’s really more than that,” Vlatko underscored. “Localization is the act of translating and maintaining Kubernetes documentation into your native language.”

There’s an emphasis on “native” here because, she continued, “We really do rely on contributors who know how to translate a term that may actually have many words that could be used, many phrases that could be used in a certain translation. And we want our docs, which are used by people all around the world to actually learn about and use Kubernetes. We actually need them to be as technically accurate and then language-wise accurate as possible.”

That makes this a global project requiring a community of native speakers who understand the technology.

“Localization is not just about translation. It’s about community. It’s about doing a lot of work. But then it’s also about helping users adopt and welcoming them into your native community as well,” Vlatko continued.

It All Starts with a Community

The first step in open source localization is finding your community. “We need folks who are not only going to work together but actually approve each other’s stuff,” Vlatko said. With this in mind, the Kubernetes SIG Docs require a minimum of two owners for a localization to launch — already preparing to reduce the fragility of open source projects that have a single maintainer.

Then, to further reduce loneliness and increase support, SIG Docs has created the Localization subgroup, which runs across languages and writing systems. Each localization subproject is then able to organize itself as it sees fit, in a way that’s most welcoming in its culture.

“So each of these subprojects has a different way of functioning,” Mohan later told The New Stack. “In turn, this also cascades to the various translations within the localization subproject as well. Each translation has a team and contributors that have different processes and meetings.”

With all languages, including English, Vlatko noted that building the community is not only the first step but the most challenging. After all, like all things open source, it relies on unpaid volunteers.

The localization subproject meets at 3 p.m. UTC on the first Monday of each month. Notably, they follow a remote-work best practice by scheduling in UTC, which is both universal and doesn’t change with the seasons. They also rely on asynchronous communication staples, including a mailing list and a Slack community. The SIG also has an open agenda policy to allow for a more open, questioning culture.

What Are the Requirements to Get Started with Localization?

Community is the first but not the only requirement. You also have to already be an active Kubernetes organization member before you can start your own community. That means you already understand and are committed to the project, and are logistically able to review pull requests and take ownership of the work. This could be a technical contribution or a contribution to another localization project, including the English documentation.

Then, once these standards of interest, community and existing involvement are met, you can launch your localization. First, find your ISO 639-1 two-letter language code (for example, hi for Hindi), which is used for creating the repository branch and naming your Slack channel.

Then, create a localization Slack channel so you are able to organize in your native language. The first thing that needs to be collaborated on and localized is the Kubernetes community code of conduct.

There are also other minimum required pieces of Kubernetes content for translation before the release of the project:

  • Home page
  • Setup
  • Tutorials, both Kubernetes Basics and Hello Minikube
  • All site strings
  • Releases

What Localizations Exist and How You Can Contribute Today

Currently, the Kubernetes docs maintained by SIG Docs are available in 15 languages:

  • English
  • French
  • German
  • Hindi
  • Indonesian
  • Italian
  • Japanese
  • Korean
  • Mandarin
  • Polish
  • Brazilian Portuguese
  • Russian
  • Spanish
  • Ukrainian
  • Vietnamese

There are also existing subgroups in Bulgarian, Tamil, and Turkish, which hope to release localized docs and websites in their languages too.

So as not to make English the default, Mohan pointed out that English is also a localization. “If you are already familiar with Kubernetes or if you’re even just getting started, we really appreciate your points of view on how we could make the documentation better, whether that’s clarifying how a particular concept is explained or making a minor typo edit, it’s highly appreciated,” she said.

To get started contributing, you must sign the Kubernetes contributor license agreement. Then, you can join a community, best kicked off via the respective Slack channels, for both existing and upcoming localizations.

Then join both the SIG Docs and SIG Docs localization mailing lists. And attend those monthly meetings.

“Those are really good avenues to clarify your doubts. Because it’s a group of folks who are already working on the same stuff,” Mohan remarked.

This is a massive project, she commented, so start by posting on the SIG Docs Localizations Slack channel. Just don’t ask your questions in private, she recommends, as you run the risk of inundating the localization leads while not giving everyone the opportunity to respond. Plus, others may share the same doubts.

Beyond the mailing list, SIG Docs holds a biweekly meeting on Tuesdays, as well as an APAC-friendly meeting on the fourth Wednesday of every month.

Each localization project has about 25 to 50 contributors, Mohan estimates, but some languages manage to function with fewer.

“Most projects I know need help,” she said. If you’re interested in Kubernetes localization for your native language, or benefit from Kubernetes in some way, you’re encouraged to volunteer time and give back.

Hindi: A Kubernetes Localization Case Study

“Finding your community, finding your tribe to build a localization, is one of the most challenging aspects,” Mohan said, having learned it the hard way kicking off Hindi, the newest completed localization. This is the first localization in the Devanagari script, which is used for Sanskrit, Hindi and other Indian languages. The Hindi localization team now has six leaders, including two who have been regular contributors over the past year.

The Hindi effort kicked off in late 2021 and launched at the end of August 2022. There are currently 245 Hindi speakers active in the respective Slack channel.

Localization into Hindi means opening up a language spoken by more than half a billion people.

Localization efforts also don’t end at launch. “They’re still ongoing because docs are never done. They update every release cycle, and tracking those changes is a lot of manual effort currently. The people leading the localization are required to actively track the docs that change per cycle and put out issues for them, ask contributors to come and chip in, and this is an ongoing effort that doesn’t stop at the point the localization goes live,” commented Mohan.

The post Kubernetes Community: A Guide to Open Source Localization appeared first on The New Stack.

]]>
How Implicit Bias Impacts Open Source Diversity and Inclusion https://thenewstack.io/how-implicit-bias-impacts-open-source-diversity-and-inclusion/ Fri, 05 May 2023 18:00:15 +0000 https://thenewstack.io/?p=22705972

“Although these unconscious biases are often not intended, the harm of them affects not only the individuals but the community

The post How Implicit Bias Impacts Open Source Diversity and Inclusion appeared first on The New Stack.

]]>

“Although these unconscious biases are often not intended, the harm of them affects not only the individuals but the community at large.”

Explicit biases are easier to identify and call out. But, in a community, the implicit biases that developer advocate and technical writer Anita Ihuman speaks of act as death by a thousand cuts.

The lead of the diversity and inclusion badging review process for the CHAOSS project spoke virtually at KubeCon+CloudNativeCon Europe from her home in Nigeria to address the biases that permeate open source. After all, we know that open source has far less diversity than the tech industry as a whole.

Hopefully, open source leaders and contributors will learn from Ihuman’s advice, both to recognize unconscious biases as they inevitably crop up in their communities and to gain the capability to address them.

Why Diversity and Inclusion Matters in Open Source

“In open source, when we’re referring to diversity and inclusion, we’re referring to an environment where everyone, regardless of their beliefs, race, background, nationality and appearance, feels equally welcome to participate, feel equally welcome to interact with other persons, and also make good impacts or contributions within the communities that they are actually a part of,” Ihuman explained.

Open source is built on the collaborative efforts of different individuals, so this really matters.

Diversity and inclusion within open source communities is linked to several benefits including:

  • Project outreach to new contributors
  • Project sustainability
  • Increased productivity in team performance
  • More diverse and creative skills
  • Diverse perspectives drive innovation

But then, knowing these benefits, why do open source communities continue to receive feedback from those underrepresented that they don’t feel they can actively participate? What are the challenges they face?

The Different Types of Bias to Battle

Bias, Ihuman explains, is a prejudice held by an individual, group or institution, in favor of or against another individual, group or institution. It usually involves a comparison made in a way that’s considered unfair, causing negative or positive effects on the targets. Biases can be explicit (conscious) or implicit (unconscious). The latter is the focus of Ihuman’s talk: the implicit biases implanted by our individual and collective experiences and stereotypes.

“In most open source communities, there was a trend of domination of Western male developers, compared to women, people of color, parents, non-technical contributors, physically challenged people, people from marginalized groups and those who are not experts in the developer fields,” Ihuman observed.

In fact, in still the most in-depth open source diversity survey of its kind, the 2017 GitHub survey found that 60% of those from marginalized groups “indicated interest that they would love to contribute to open source projects, however, they are significantly less likely to because of the reception or the way things are carried out in the individual communities,” Ihuman cited in her talk.

Most of the time, people from these groups are motivated to participate and projects are motivated to receive more participants, but somehow there’s a disconnect in open source communities when it comes to recruiting new contributors that aren’t white and male.

Kinds of Implicit Bias Pervading Open Source Communities

To create a baseline definition and to help KubeCon attendees recognize their own unconscious prejudices, Ihuman broke down the many kinds of implicit bias that are both inherent to humankind and inherent to open source communities.

Halo Effect – The tendency for a positive impression of a person or group to influence judgments about them more broadly. All the positive things you’ve heard about someone or a group, Ihuman said, can overshadow reality or any negative impressions they’ve made on you or anyone around you. As we’ve already talked about, the halo effect can pose a real threat to security, as it can distract from signs of malicious injections made under the guise of preferred, known contributors.

Horn Effect – The antonym of the halo effect: thinking negatively about a person or group by default because of all the negative things you’ve heard before.

Confirmation Bias – When we favor information that fits our pre-existing beliefs, regardless of the truth. Ihuman gave the example of her own confirmation bias, picked up from TV, that all Asians are better at math and science.

Gender Bias – The tendency to prefer one gender over another, as in politics, where the masculine is often preferred.

Affinity Bias – The tendency for people to connect with those who share similar backgrounds and experiences. Self-segregation by race is a common, visible example. As we’ve already written about, affinity bias runs rampant in open source communities — and the tech industry as a whole — with cis-gendered white men aligning with other cis-gendered white men.

Name Bias – Placing judgment on how a name is spelled or pronounced, typically showing a preference for Anglo-Saxon names. Name bias creeps into many hiring processes and could also affect how maintainers consider pull requests.

Appearance Bias – Favoring someone based on physical features, typically ones similar to your own group’s. These beauty standards are often grounded in white supremacy.

Sexuality Bias – Another name for heterosexism, this means treating people differently based on their sexual orientation. “Sometimes people in open source say they do not actually own up to their sexuality in communities. They just make the contributions and go, simply because they have done this in the past and the treatment they received was not fair, and so they did not want to repeat this same experience.” Ihuman continued that this happens often, but other community members remain unaware of it.

Conformity Bias – Behaving similarly to and agreeing with those within our group, even if it contradicts our own opinion or best judgment. Conformity bias amplifies the other types of implicit bias.

“Even if it contradicts our opinions or our beliefs, we still tune in anyway. A simple instance is where we always go with how the majority votes, regardless of what the results might be, always going with the group with the highest number of votes, the group with the largest say, the group that actually takes the largest share,” Ihuman said. This is usually the most represented group.

Signs of these kinds of implicit bias that people are often unaware of, but that pervade communication in open source communities, include:

  • Microaggressions
  • Unequal treatment
  • Assumptions and stereotyping
  • Double standards

The Impact of Implicit Bias

Implicit bias thwarts the nature of open source, because open source depends on the collaboration of different individuals and different groups. If we let implicit bias get the best of us, we block the ways for other people to get on board and collaborate on a project.

Implicit bias has people following specific patterns, Ihuman warned, like when a community hosts an event and, year after year, follows the same format to recruit speakers. Manels (all-male panels) run rampant.

Any kind of implicit bias can create unfair disadvantages, like overlooking people from other groups for a particular position or opportunity within the community or the workplace.

Implicit bias also has a negative effect not only on an open source community but on those with these inherent prejudices themselves because, Ihuman says, it blinds people to the creative and innovative ideas of others.

“For instance, you are building software and you do not consider the accessibility. But someone who has some form of a neurological challenge notices accessibility issues within your software,” and easily points to a solution, Ihuman said. But inherent bias may lead project leadership to disregard this need or solution.

Overall, these biases can contribute to unhealthy environments. And people talk, sharing their negative experiences. The perception of a community can change quickly, based even on just one person’s account.

Without a strategy built on inclusive practices, open source communities fail to reap the benefits of all forms of diversity and even fail to notice that potential.

“We should care because we are humans and there is definitely going to be those differences among us, no matter where we are,” Ihuman said, arguing that open source communities and especially leadership have to work on their communication to support and even honor those differences.

This inclusive mindset and action plan, she continued, is essential in maintaining a community-first mentality, where empathy is essential. After all, open source has more eyes and more ears than most other projects, which means the individual effort and the individual experience matters, maybe even more.

“It is impossible to actually have a healthy community without taking note of some of the things that are maybe affecting the health of that particular community at large,” Ihuman said.

“It affects us all,” she continued. And experiencing this implicit bias negatively affects everyone.

How to Address Implicit Bias within Your Community

Awareness is of course the first step to addressing biases within the communities you’re a part of. Ihuman recommends kicking off this journey of self-exploration by taking the Implicit Association Test or IAT.

Challenges will arise in any community. As an open source leader, examine how an interaction went down and how both sides reacted. Evaluate, she said, whether there is a way to remedy the situation.

“A lot of times, perpetrators are unaware of their actions, so it’s very good to educate your community about these biases,” Ihuman continued. And encourage team members to speak up when they feel harmed by biases or have witnessed harm against other community members. Aim to settle it amicably and privately, when possible, she recommends, citing Conventional Comments as a good way to hone feedback-giving skills.

Open source communities, she recommends, should set diversity and inclusion goals and be intentional about them. The open source project she works on, CHAOSS, is dedicated to community health analytics. Let data inform your decisions, she continued.

One of the most important pillars of open source diversity and inclusion is the code of conduct. Ihuman recommends going even beyond the code of conduct: These guidelines should include explicit examples of what should and should not occur within the community, and spell out repercussions for violations. When members speak up about negative experiences, there must be consequences.

Remind community members of the code of conduct regularly; it should not be just a box to check before the first contribution.

Finally, Ihuman recommends taking the lead by creating diversity and inclusion focus groups. Concentrate on measurable, incremental improvements that contribute to project sustainability:

  • Talent retention
  • Contributor satisfaction
  • Productivity

Just make sure that your D&I strategy is not generalized, but rather customized for your community and the sub-groups within it.

“Attaining virtual workplace diversity and inclusion takes a group effort. It doesn’t take one person to ship all of this. But if every single person makes an effort, at the end of the day, we’re definitely going to get a more inclusive, welcoming open source community,” Ihuman closed with.

The post How Implicit Bias Impacts Open Source Diversity and Inclusion appeared first on The New Stack.

]]>
What Is Service Ownership? https://thenewstack.io/what-is-service-ownership/ Fri, 05 May 2023 15:41:25 +0000 https://thenewstack.io/?p=22707195

Chances are your organization is working on a number of digital transformation initiatives. It’s no surprise because today’s customers want

The post What Is Service Ownership? appeared first on The New Stack.

]]>

Chances are your organization is working on a number of digital transformation initiatives. It’s no surprise because today’s customers want high-quality, always-on customer experiences, which are leading to increasingly complex digital environments.

System failures are common occurrences, but downtime, outages or slowdowns can negatively affect a customer’s impression of your company, so how you deal with incidents matters. And the more complex your systems are, the harder it is to get them fixed quickly and efficiently.

Service ownership enables your organization to do this better by providing clear ownership of, and accountability for, all the digital services you run, including code in production.

Q: How is digital transformation changing the way businesses run?

We’re all aware of just how much customer expectations have changed over the last few years. If a digital service doesn’t work, customers are more than willing to switch to a rival. Some may never come back at all.

This challenge is falling to technical teams to solve. But the huge amount of digital transformation and adoption of hybrid and multicloud environments is resulting in increasingly complex digital operations infrastructures. This is further compounded by the effects of decentralizing teams into lines of business, each with their own tools and workflows. There’s less visibility across teams, and it’s restricting collaboration.

Services will fail; that’s a fact. What matters is how your company responds when it happens, and you can’t do this efficiently or effectively without a centralized view. Siloed systems and teams hurt customer experience and put your business at risk.

Q: What is service ownership?

At its simplest, service ownership is about “you code it, you own it.” It’s an operating model where the people creating and delivering software take responsibility at every stage of the software’s life cycle. In practice, service ownership means that the people involved in designing and coding a product (or software or service) continue to support it once delivered.

Service ownership reallocates workloads to the most appropriate person to manage them. It reduces the burden on traditional support teams by making engineers own their services in production. It can dramatically speed up time-to-fix, a key performance indicator for teams, but can also improve the speed at which organizations ship new features.

It’s incredibly logical when you think about it. If you’re the person who understands the code, knows what its dependencies are and what relies on it, then you’re ideally placed to solve new and unexpected problems effectively and with very little guesswork. You can also work quickly to make changes without worrying about unintended consequences.

Q: What are the benefits of service ownership?

Moving to a service ownership model gives people clear responsibilities across your services. It encourages ownership and accountability for performance as well as for fixing things when they go wrong. There are three key benefits:

  1. A far better experience for your customers. Service ownership puts developers much closer to the people they’re ultimately working for, the customer. This means they can better see the impact of their work, not only on customers but also the business. It’s more motivating for them, but it also speeds up fixes or updates, because developers see the issues themselves, rather than relying on a second-hand briefing or a support ticket.
  2. You can focus on what you deliver, not the organizational structure behind it. When you own your code end-to-end, you’re creating an automatic quality-control loop: No one wants to be disturbed out of hours because they didn’t check something properly. An added advantage is that you’re creating a personal connection that can outlive organizational changes. It will always be clear who is responsible for what, even if the company hierarchy changes.
  3. A significant reduction in mean-time-to-resolution (MTTR). When a service’s developer is first-line support for their own code, fixes happen quicker. Fewer staff need to be involved, and there’s no need for handovers from first-line responders, which can introduce added risks. It quite simply minimizes the impact on your customers and your teams, and vastly speeds up MTTR.

Q: How can organizations navigate this cultural shift?

Shifting to service ownership is not just a case of simply saying to engineers, “You’re responsible for your code in production now.” You’ll need organizational buy-in, supported by senior managers, and a robust change management program.

For service ownership, as for any significant organizational change, it’s a good idea to start from a position of shared responsibility and compassion. Developers may feel that being called to fix something means they’re being blamed for causing the issue in the first place. It can take time to achieve “blamelessness.”

You might also find that resistance comes from fear of the impact; with service ownership, that’s often the case in a central operations team. We know that being able to clearly articulate the benefits can help here. So we would usually find ways to discuss things like the increase in visibility and pipeline control that teams will see as they move to the cloud, the reduction in manual work — and increase in productivity — that comes from automation, and even how security and governance become easier to manage.

Q: What steps should organizations take to activate service ownership?

We have seen that it can be difficult for organizations to get started with service ownership, especially as it does involve such a cultural shift. There’s also a real fear of failure in some organizations that can be hard to overcome.

However, it’s worth remembering that many organizations have already taken this journey, so there are a lot of best practices out there that are ready to adopt. These are some of the best practices PagerDuty has gathered in helping customers get started on the journey:

  1. Stay agile. Agile-type workflows can help teams to identify things that are going well and potential blockers. These can be essential when implementing a new culture and keep teams on track to longer-term goals.
  2. Start small. It’s well worth choosing a noncritical production system to demonstrate the value you can see from service ownership, particularly to get the executive support you’ll need to successfully implement change. Measure a baseline of performance for your current production system to help show improvement.
  3. Don’t play the blame game. Mistakes are inevitable, but people have to be empowered to make decisions and experiment, and they can’t do that if they’re afraid of retribution for making the wrong choices.
  4. Define services clearly. Services should be set up in a granular way to help identify the sources of problems, and dependencies should be documented to help define roles and responsibilities so incidents and actions don’t fall through the cracks.

These are just a few, but there’s a lot more out there, from getting the right-size teams to which projects you should prioritize for service ownership and how to document what you’re doing.

Reaping the Benefits of Service Ownership

Service ownership is about shared responsibility across your organization. Yes, you’re asking a developer to be the person who ultimately fixes the code in a failed service, but they can’t do that without an organization-wide adoption of the service ownership approach. You also need the right tools to make it happen.

There may be reluctance from business teams to adopt service ownership. The benefits may not be clear to them, they might be worried about blame, or perhaps they simply don’t understand how it would work. PagerDuty Operations Guides provide best-practice solutions and frameworks to support organizations on this journey so that they can quickly make that cultural shift and start to see the benefits for their business and their customers.

The post What Is Service Ownership? appeared first on The New Stack.

]]>
Entrepreneurship for Engineers: Do You Need to Pivot?  https://thenewstack.io/entrepreneurship-for-engineers-do-you-need-to-pivot/ Fri, 05 May 2023 12:00:38 +0000 https://thenewstack.io/?p=22707018

Startups are constantly iterating, changing their communication strategy, sales strategy, sometimes even product strategy. But sometimes founders need to make

The post Entrepreneurship for Engineers: Do You Need to Pivot?  appeared first on The New Stack.

]]>

Startups are constantly iterating, changing their communication strategy, sales strategy, sometimes even product strategy. But sometimes founders need to make a pivot that fundamentally shifts what the company is all about.

The sign that your company needs to make a pivot is simple, according to Jon Nordmark, co-founder and CEO of iterate.ai, an enterprise low-code platform: “No one is buying your product.”

If no one is buying your product, it’s obvious that something needs to change. But when you drill down, it can be challenging to figure out exactly why your product isn’t catching on.

“There are many moving parts,” Nordmark said. “Identifying what the problem is, is hard.”

Ultimately, if your startup seems to be heading for trouble — particularly if it seems at risk of running out of money — you should pivot something. But do you keep your product the same and change your messaging? Or change your marketing and sales tactics? Or fundamentally change your entire company’s purpose?

I spoke with two entrepreneurs with experience with that kind of pivot about how to spot the need for one, how to execute it, and more.

Technology Woes

When Ian Tien, CEO and co-founder of secure collaboration platform Mattermost, first got into the startup accelerator Y Combinator, it was for a company making a video game engine. The problem was, the technology worked great on the web and iOS, but it didn’t work well on low-end Android devices — and for a game engine, that was a fatal flaw.

But he found that some of the test games his startup produced were popular. So the company became a game studio. But making games isn’t scalable — it’s too much of a hits-driven business.

By the fourth and fifth game, it was clear to Tien that the business should pivot. “Our pivot came from a video game business to an enterprise software business,” he said.

So Mattermost, as it is today, was the third iteration of the company, and both pivots were fairly massive.

Where Do You Go?

In the case of Mattermost, the current business came out of the team’s frustration with its collaboration platform, which not only was buggy and prone to errors, but wouldn’t let the team export its data.

When the team released Mattermost as an open source project, enterprises started coming to the company and asking for a more “enterprise-ready” version, one that was more scalable and had more security controls.

“That pull from the market was a strong signal that there was something extraordinary to build here,” Tien said.

In most successful pivots there will be both a push and pull — a clear signal that the existing business strategy wasn’t working coupled with signs that another business might be more successful.

Convincing Your Investors

In Mattermost’s case, early investors and advisers were on board with transitioning from a game studio to an enterprise software company. But that’s not always the case.

Sometimes, said Nordmark, you have to persuade your funders and board members to make a change.

“A lot of times when they invest, you may not have product-market fit yet, it’s more of a theory,” he said. “It gets really hard to change because you have to convince your board that you’ve made a mistake.”

When Pivots Go Wrong

On the other hand, sometimes pivots can be driven by investors, and they can end up diluting the brand.

In 1998, Nordmark started eBags.com, a website that sold luggage. He and his co-founders had experience and connections in the luggage industry and were able to leverage that experience in the new company.

But then, as a way to expand, and under pressure from investors, they started selling handbags, and then shoes as well. This was a pivot of sorts, but it complicated things significantly, because everything was different about selling shoes, from the suppliers to the purchasing cycle.

Eventually, the team went back to selling luggage, but only after learning the hard way that its competitive advantage was as a travel company, not an accessories company.

Staying Nimble

Pivoting isn’t easy, but both Nordmark and Tien said that every startup is likely to undergo some kind of pivot, even if it isn’t a drastic one.

“If your identity is based on the context of an early-stage startup in a specific space, that’s going to be a little precarious because you’re going to learn a lot through customer discovery and you’re going to change a lot,” Tien said.

Sometimes this can mean redefining for yourself what the journey you’re on is all about.

“What I realize is my career is not really about open source or about video games or about Microsoft Office,” Tien said. “My career is really about communities, whether that community is the Microsoft community and IT professionals, or video games and the social games we produced, or it’s open source and having communities of contributors build something really wonderful together.”

The post Entrepreneurship for Engineers: Do You Need to Pivot?  appeared first on The New Stack.

]]>
How Teleport’s Leader Transitioned from Engineer to CEO https://thenewstack.io/how-teleports-leader-transitioned-from-engineer-to-ceo/ Thu, 04 May 2023 17:14:18 +0000 https://thenewstack.io/?p=22706290

The mystery and miracle of flight sparked Ev Kontsevoy’s interest in engineering as a child growing up in the Soviet

The post How Teleport’s Leader Transitioned from Engineer to CEO appeared first on The New Stack.

]]>

The mystery and miracle of flight sparked Ev Kontsevoy’s interest in engineering as a child growing up in the Soviet Union.

“When I was a kid, when I saw an airplane flying over, I was having a really hard time not stopping and staring at it until it’s gone,” Kontsevoy, co-founder and CEO of Teleport, said in this episode of the Tech Founder Odyssey podcast series. “I really wanted to figure out how to make it fly.”

Inevitably, he said, the engineering path led him to computers, where he was thrilled by the power he could wield through programming. “You’re a teenager, no one really listens to you yet, but you tell a computer to go print number 10 … and then you say, do it a million times. And the stupid computer just prints 10 [a] million [times]. You feel like a magician that just bends machines to your will.”
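The toy program Kontsevoy describes really is just a few lines. A minimal sketch in Go (the language Teleport itself is written in), assuming the straightforward reading of his anecdote, might look like this:

package main

import "fmt"

func main() {
	// Print the number 10 a million times: the kind of tedious,
	// obedient repetition that made a teenage programmer feel
	// like a magician bending a machine to his will.
	for i := 0; i < 1_000_000; i++ {
		fmt.Println(10)
	}
}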

In this episode of the series, part of The New Stack Makers podcast, Kontsevoy discussed his journey to co-founding Teleport, an infrastructure access platform, with TNS co-hosts Colleen Coll and Heather Joslyn.

Teleport, which Kontsevoy co-founded with Taylor Wakefield and Alexander Klizhentas, began in 2015 as a company called Gravitational. In May 2022 it announced it had raised $110 million in a Series C funding round, bringing its total raised to $169 million — and its post-money valuation to $1.1 billion.

This funding round included participation from Insight Partners; The New Stack is a wholly-owned subsidiary of Insight.

‘My Customers Are Other Engineers’

While the former USSR was “a fairly dark place to grow up,” Kontsevoy said, it was a culture in which science and technology were revered. The tech founder still carries that spirit with him, he said: “All of my childhood heroes, they’re mostly engineers and scientists.”

A post-college job at National Instruments, based in Austin, Texas, gave him lots of interaction with other engineers, and a desire to surround himself with people from that world.

At trade shows and conferences, Kontsevoy said, other engineers “would ask you questions about the product that you built. And you get to ask them questions about what they’re building. And I would meet petroleum engineers, aerospace engineers, I would meet the engineers that built all the things that surround us. It was fascinating. You get to learn how the world works.”

As a result, he added, “I wanted to have a job where my customers were other engineers. I wanted to be connected to this kind of broader community.”

That desire, he said, led him to entrepreneurship: “I was so focused on building something that other engineers would love to use.”

An earlier startup, Mailgun, which was acquired by Rackspace in 2010, proved to be a learning experience, he said.

“I never transitioned from being an engineer to being a CEO. So I kind of got stuck in this cycle of talk to customers, go build. And I started to neglect that the company itself needs building.”

Kontsevoy realized, he said, “that I needed to step back and learn how to be CEO before I start another company again.”

Learning How to Be the Boss

Teaching himself how to run a company took time and a change in mindset, away from coding and toward more delegation to his team, with more focus on issues like scaling the business.

“If I told you that I never Googled ‘how to be a CEO,’ I’d be lying,” he said.

While acknowledging that learning leadership skills is a personal journey that will be different for everyone who undertakes it, Kontsevoy said three things helped him: books, meeting people who had expertise he didn’t have, and mentorship.

About leadership and management books, he advised listeners to “try to spot patterns. Because not every piece of advice is applicable to you … advice generally has limited utility because it has an expiration date. And it is also contextual; what worked for a person in certain circumstances back then might not necessarily apply to you.”

He praised the mentorship of leaders from Rackspace, the company that acquired his first startup, as being helpful in guiding his journey.

“CEO is a lonely job,” he acknowledged. “You can’t really complain too much. There is a lot on your mind that it’s not OK to share with other people until we develop a sense of clarity. So having someone that you can be relaxed about asking for advice and bounce ideas off is incredibly helpful. I do recommend first-time CEOs to have someone who’s been in their shoes that they can actually be open with.”

Check out the full episode for more on Kontsevoy, how he works with his co-founders, raising money, finding product-market fit and establishing an organizational culture.

Like this episode? Here are more from The Tech Founder Odyssey series:

How 2 Founders Sold Their Startup to Aqua Security in a Year

How Solvo’s Co-Founder Got the ‘Guts’ to Be an Entrepreneur

Feature Flags Are Not Just for Devs

The post How Teleport’s Leader Transitioned from Engineer to CEO appeared first on The New Stack.

]]>