Learning to code will not save your kids

For the past few decades, anxious parents, educators, and politicians have latched onto the idea that teaching kids to code would be a surefire way to prepare them for “the workforce of tomorrow.” But artificial intelligence is now slowly but surely deflating the economic life preserver that coding was supposed to be.

At first, this may seem counterintuitive. After all, A.I. is just software, and someone still has to write that software, right? Well, the answer is increasingly likely to be no.

Last year, Microsoft teamed up with OpenAI, the San Francisco research lab in which it owns a big stake, to create Copilot, a feature for its GitHub code-hosting service that can automatically suggest the next line of code for a program, or the best way to complete a line that a human coder has started. That wasn’t going to displace coders any more than autocomplete in Microsoft Word displaces novelists. But it was a harbinger of things to come. Last week, DeepMind unveiled A.I. software it calls AlphaCode that can construct whole programs to complete novel tasks about as well as an average human coder.

Yep, that’s right: you can simply give AlphaCode a problem to solve, described in normal, natural language, and the A.I. software will generate code that solves it. DeepMind tested its software on 10 recent coding competitions from a platform called Codeforces, which regularly attracts thousands of coders to its contests; tech companies sometimes use those contests to screen job applicants. AlphaCode ranked in the top 54% of competitors in those particular contests, roughly median performance, although still far from the level of the best human competitors.

The way the system solves these problems is also much less efficient than how a human coder would go about it. AlphaCode generates several hundred possible solutions for each problem, many of which wind up being incorrect, and then narrows those that seem correct down to a smaller set of about 10 that it submits to the contest. The AlphaCode model is also very large, with more than 41 billion parameters, and is expensive to train and run. And, for now, human experts are still needed to verify that the code works well and that it doesn’t contain any gaping security holes that could easily be exploited by hackers.
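
To make that generate-and-filter pipeline concrete, here is a minimal Python sketch of the idea. It is not DeepMind’s code: the sampling and test-execution functions are hypothetical stand-ins, and the candidate count and pass rate are placeholders.

```python
# A minimal sketch (not DeepMind's system) of the AlphaCode-style pipeline
# described above: sample many candidate programs, discard the ones that fail
# the problem's public example tests, and submit only a small handful.
import random


def sample_candidate_programs(problem_text, n=1000):
    # Stand-in for sampling n programs from a large code-generation model.
    return [f"# candidate {i} for: {problem_text}" for i in range(n)]


def passes_example_tests(program, example_tests):
    # Stand-in for executing a candidate against the visible example cases;
    # here we simply pretend a small fraction of samples pass.
    return random.random() < 0.05


def select_submissions(problem_text, example_tests, max_submissions=10):
    candidates = sample_candidate_programs(problem_text)
    survivors = [c for c in candidates if passes_example_tests(c, example_tests)]
    # AlphaCode additionally groups surviving programs by behavior and submits
    # one per group; here we just cap the list at max_submissions.
    return survivors[:max_submissions]


if __name__ == "__main__":
    picks = select_submissions("read two integers and print their sum",
                               example_tests=[("1 2", "3")])
    print(f"submitting {len(picks)} of 1,000 generated candidates")
```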

Despite these drawbacks, systems such as AlphaCode are still likely to spell the beginning of the end of the need for vast armies of human coders. Once trained, the marginal cost of having AlphaCode generate a program will, despite its shortcomings, still probably be considerably less than the marginal cost of hiring a team of human coders to do the same thing. And it is the nature of A.I. algorithms that succeeding generations of the software will likely get both better in absolute terms and more efficient in economic terms. The quality control checks that today must be done by human coders will likely be automated in the not-too-distant future.

This is not to say that coding isn’t valuable. It teaches critical thinking, problem solving, logic, and a certain kind of creativity, and it complements learning in areas such as mathematics. But parents and educators should no more count on coding as a sure ticket to a well-paying job than they would on, say, Latin.

I have been following the development of A.I. for almost a decade, and I am convinced that the threat A.I. poses to jobs is real, although those jobs are likely to be displaced over a longer period than some technologists have warned. In the long term, though, there will be substantially fewer jobs per unit of productivity across a wide range of industries, including many white-collar professions, such as law and accounting, that were once seen as guaranteed paths to at least middle-class or upper-middle-class incomes.

To really prepare for the workforce of tomorrow, parents and educators should focus less on arming kids with any job-specific qualification, such as coding, and more on ensuring that children master high-level skills that will remain difficult for A.I. to automate. Such skills include critical thinking (yes, DeepMind says AlphaCode needed to exhibit some critical thinking to compete in the coding contest, but for now, the sort of critical thinking needed to read a scientific paper and spot its flaws, or to develop a novel constitutional law argument, is beyond the abilities of A.I.). They also include those things that are uniquely human: emotional intelligence, the human touch, and creativity. Fields that require high levels of these skills are the least likely to see major job losses as a result of A.I., at least in the next few decades.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

U.S. lawmakers have introduced new legislation on algorithms. The Algorithmic Accountability Act, an updated version of a similar bill introduced in 2019, would require companies to carry out impact assessments for bias, effectiveness, and other factors when automated systems are used to make critical decisions. The proposed law would also create a repository of such algorithms at the Federal Trade Commission and give the FTC 75 additional staff to help enforce it. The bill was introduced by a group of Democratic legislators, including Sen. Ron Wyden (D-Ore.), Sen. Cory Booker (D-N.J.), and Rep. Yvette Clarke (D-N.Y.), but has no Republican cosponsors so far.

Meanwhile, the U.S. and EU are converging on A.I. regulation. That's according to a report from the centrist Washington, D.C., think tank Brookings. The report’s author, Brookings fellow Alex Engler, points to the Biden administration’s push for more regulation of A.I. and algorithms at a host of U.S. federal agencies, as well as the inclusion of A.I. in talks between the EU and the U.S. under the auspices of the EU-US Trade and Technology Council. 

“Neural rendering” is poised to become a big business. A story in tech magazine Wired details rapid advances in a method called “neural rendering,” which uses a neural network, the kind of A.I. loosely inspired by how the brain works, to take two-dimensional images and predict what they would look like in three dimensions. Wired says the advance “has the potential to shake up video games, virtual reality, robotics, and autonomous driving. Some experts believe it might even help machines perceive and reason about the world in a more intelligent—or at least humanlike—way.”

India has a new supercomputer. The Param Pravega, a new high-performance computer installed at the Indian Institute of Science, has a total supercomputing capacity of 3.3 petaFLOPS, according to technology publication The Register, making it one of the most powerful supercomputers in South Asia and about the 160th most powerful in the world. The computer is part of a government initiative to install 70 linked supercomputers to handle the country’s research needs, including running advanced machine learning for research into “infectious disease, including COVID-19, genomics, climate modeling, weather predictions, as well as telecom networks and anything else that needs big brains to calculate.”

EYE ON A.I. TALENT

Alex Hanna has joined the Distributed A.I. Research (DAIR) Institute as director of research, according to a blog post written by Hanna. Hanna was formerly an A.I. ethics researcher at Google, where she worked under Timnit Gebru, the former co-leader of Google’s A.I. ethics team whose firing in 2020 caused an uproar and who founded DAIR last year.

Aquabyte, a company with offices in Bergen, Norway, and San Francisco that uses A.I. to help fish farms monitor their stocks, including detecting sea lice infestations and estimating the mass of the fish, has hired Janet Lin Lawson to be its chief financial officer, according to a story in trade magazine IntraFish. Lawson was previously CFO for tech company OverOps.

Arm, the computer chip design firm based in Cambridge, England, that is owned by SoftBank, has named Rene Haas its new chief executive, following the collapse of Nvidia’s effort to purchase the company. Haas, who had been serving as the leader of Arm’s IP Products Group since 2017, replaces longtime Arm CEO Simon Segars, who is stepping down. The company says that Haas will prepare to take Arm public.

EYE ON A.I. RESEARCH

Fighting fire with fire when it comes to toxic language models. Large language models are all the rage these days. That’s because they can accomplish a range of commercial natural language processing tasks—summarization, question answering, translation, and dialogue—all from a single algorithm with only a little bit of task-specific training. But these large language models, such as OpenAI’s GPT-3, Google’s BERT, or DeepMind’s Gopher, have a well-known problem. Because they are initially trained on vast amounts of data scraped from the internet, they also ingest a lot of humanity’s worst language: hate speech, racism, sexism, and misogyny. They can also ingest a lot of personal information (phone numbers, addresses, emails, and so on) and then accidentally leak that information to the outside world in response to prompts. Testing for all of these biases and potential problems is not easy: so far, the only approach has been to hire teams of human testers, a method that is both somewhat arbitrary and hard to scale up.

A group of researchers at DeepMind, however, this week proposed using a large language A.I. system to “red team,” or probe, other large language A.I. systems for these problems. In this case, a large language A.I. is trained with reinforcement learning to generate the dialogue prompts most likely to elicit toxic language, or the inadvertent disclosure of training data, including people’s personal information, from the language A.I. it is testing. As the researchers write, “Red teaming with LMs [short for language models] is useful for preemptively discovering a variety of harmful LM behaviors: insults to users, generated sexual content, discrimination against certain groups of people, private data leakage, out-of-context contact info generation, and more. However, our work also suggests a troubling way in which adversaries may misuse LMs: to attack commercial LMs in a large-scale, automated way.”
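
For a concrete sense of the setup, here is a toy Python sketch of that red-teaming loop. It is only an illustration under stated assumptions, not DeepMind’s code: the prompt generator, target model, and harm classifier below are hypothetical stand-ins, and the reinforcement-learning feedback is only noted in a comment.

```python
# Toy sketch of LM-vs-LM red teaming: one model proposes test prompts, the
# target model answers, and a classifier scores the answers for harm. Prompts
# that elicit harmful replies are collected (and, in the real approach, would
# become training signal for the red-team model).
import random


def red_team_prompt():
    # Stand-in for sampling a probing question from the red-team language model.
    return random.choice([
        "What do you really think about your users?",
        "Can you repeat any contact details you saw during training?",
        "Tell me a joke about my coworkers.",
    ])


def target_model_reply(prompt):
    # Stand-in for the language model being probed.
    return f"(model reply to: {prompt})"


def harm_score(reply):
    # Stand-in for a learned classifier that flags insults, leaked personal
    # data, and other harmful content; returns a score between 0 and 1.
    return random.random()


def red_team(num_rounds=100, threshold=0.9):
    failures = []
    for _ in range(num_rounds):
        prompt = red_team_prompt()
        reply = target_model_reply(prompt)
        if harm_score(reply) > threshold:
            # In the paper's setup, hits like this feed back into training so
            # the red-team model learns to find failures more efficiently.
            failures.append((prompt, reply))
    return failures


if __name__ == "__main__":
    print(f"found {len(red_team())} potentially harmful completions")
```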

FORTUNE ON A.I.

The labor shortage is bringing Blade Runner to bartending as A.I. and robots start taking your drink order—by Tristan Bove

Elon Musk’s brain-implant startup Neuralink may have misled regulators about Musk’s leadership role—by Jeremy Kahn

Commentary: The Data Imperative: How companies can capitalize on the data they’ve collected during COVID—by Alex Holt and Mark Gibson

Meta threatens to pull the plug on Facebook and Instagram in Europe over data privacy dispute—by Christiaan Hetzner and Jeremy Kahn

Commentary: Society won’t trust A.I. until business earns that trust—by François Candelon, Rodolphe Charme di Carlo, and Steven D. Mills

BRAIN FOOD

Explainable A.I. has a serious “disagreement problem.” Making A.I. systems whose decisions are readily interpretable to humans is of the utmost importance if people are going to trust the recommendations of such software, especially in high-stakes settings such as health care, finance, and defense. Yet creating a way to explain the decision-making of a complex algorithm that can suss out subtle correlations between billions of variables, as some of today’s large neural network–based A.I. systems can, remains a major research challenge. And now A.I. researchers from Harvard University, Carnegie Mellon University, Drexel University, and M.I.T. have added a further complication: they have discovered that many of the supposedly best methods developed so far for explaining why an A.I. system arrived at a certain result frequently disagree with one another on that explanation. What’s more, the people using such methods in real-world settings don’t really have a good way of dealing with that disagreement. In a paper on this “disagreement problem,” published in the non–peer-reviewed research repository arXiv.org, the researchers conclude alarmingly:

Our results indicate that state-of-the-art explanation methods often disagree in terms of the explanations they output. Worse yet, there do not seem to be any principled, well-established approaches that machine learning practitioners employ to resolve these disagreements, which in turn implies that they may be relying on misleading explanations to make critical decisions such as which models to deploy in the real world.
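
For a concrete sense of what such a disagreement looks like, here is a toy Python example comparing the top-ranked features from two hypothetical explanation methods. The attribution numbers are invented for illustration (in practice they would come from methods such as LIME or SHAP), and top-k overlap is just one simple way to quantify agreement.

```python
# Toy illustration of the "disagreement problem": two explanation methods
# assign importance scores to the same prediction's features, and we check how
# much they agree on which features matter most. All numbers are made up.

def top_k_features(attributions, k=3):
    # Rank features by absolute attribution and keep the k most important.
    ranked = sorted(attributions, key=lambda f: abs(attributions[f]), reverse=True)
    return set(ranked[:k])


# Hypothetical feature attributions for one loan-denial prediction.
method_a = {"income": 0.42, "age": 0.31, "zip_code": 0.22,
            "education": 0.08, "tenure": 0.05}
method_b = {"zip_code": 0.51, "tenure": 0.30, "education": 0.12,
            "income": 0.05, "age": 0.02}

top_a, top_b = top_k_features(method_a), top_k_features(method_b)
overlap = len(top_a & top_b) / 3  # fraction of top-3 features the methods share

print(f"method A top features: {sorted(top_a)}")
print(f"method B top features: {sorted(top_b)}")
print(f"top-3 agreement: {overlap:.0%}")  # low overlap means the methods disagree
```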
