My 13 favorite AI stories of 2022 | The AI Beat
VentureBeat
Last week was a relatively quiet one in the artificial intelligence (AI) universe. I was grateful — honestly, a brief respite from the incessant stream of news was more than welcome. As I rev up for all things AI in 2023, I wanted to take a quick look back at my favorite stories, large and small, that I covered in 2022 — starting with my first few weeks at VentureBeat back in April.

In April 2022, emotions were running high around the evolution and use of emotion AI, which includes technologies such as voice-based emotion analysis and computer vision-based facial expression detection. For example, Uniphore, a conversational AI company enjoying unicorn status after announcing $400 million in new funding and a $2.5 billion valuation, introduced its Q for Sales solution back in March, which “leverages computer vision, tonal analysis, automatic speech recognition and natural language processing to capture and make recommendations on the full emotional spectrum of sales conversations to boost close rates and performance of sales teams.”

But computer scientist Timnit Gebru, the famously fired former Google employee who founded an independent AI ethics research institute in December 2021, was critical of Uniphore’s claims on Twitter. “The trend of embedding pseudoscience into ‘AI systems’ is such a big one,” she said. This story dug into what this kind of pushback means for the enterprise, and how organizations can calculate the risks and rewards of investing in emotion AI.

In early May 2022, Eric Horvitz, Microsoft’s chief scientific officer, testified before the U.S.
Senate Armed Services Committee Subcommittee on Cybersecurity, where he emphasized that organizations are certain to face new challenges as cybersecurity attacks increase in sophistication — including through the use of AI. While AI is improving the ability to detect cybersecurity threats, he explained, threat actors are also upping the ante. “While there is scarce information to date on the active use of AI in cyberattacks, it is widely accepted that AI technologies can be used to scale cyberattacks via various forms of probing and automation…referred to as offensive AI,” he said. However, it’s not just the military that needs to stay ahead of threat actors using AI to scale up their attacks and evade detection. As enterprise companies battle a growing number of major security breaches, they need to prepare for increasingly sophisticated AI-driven cybercrimes, experts say.

In June, thousands of artificial intelligence experts and machine learning researchers had their weekends upended when Google engineer Blake Lemoine told the Washington Post that he believed LaMDA, Google’s conversational AI for generating chatbots based on large language models (LLMs), was sentient. The Washington Post article pointed out that “Most academics and AI practitioners … say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet. And that doesn’t signify that the model understands meaning.” That’s when AI and ML Twitter put aside any weekend plans and went at it. AI leaders, researchers and practitioners shared long, thoughtful threads, including AI ethicist Margaret Mitchell (who was famously fired from Google, along with Timnit Gebru, for criticizing large language models) and machine learning pioneer Thomas G. Dietterich.
In June, I spoke to Julian Sanchez, director of emerging technology at John Deere, about how John Deere’s status as a leader in AI innovation did not come out of nowhere. In fact, the agricultural machinery company has been planting and growing data seeds for over two decades. Over the past 10-15 years, John Deere has invested heavily in developing a data platform and machine connectivity, as well as GPS-based guidance. “Those three pieces are important to the AI conversation, because implementing real AI solutions is in large part a data game,” he said. “How do you collect the data? How do you transfer the data? How do you train the data? How do you deploy the data?” These days, the company has been enjoying the fruit of its AI labors, with more harvests to come.

In July, it was becoming clear that OpenAI’s DALL-E 2 was no AI flash in the pan. When the company expanded beta access to its powerful image-generating AI solution to over one million users via a paid subscription model, it also offered those users full usage rights to commercialize the images they create with DALL-E, including the right to reprint, sell and merchandise. The announcement sent the tech world buzzing, but a variety of questions, one leading to the next, seemed to linger beneath the surface. For one thing, what does the commercial use of DALL-E’s AI-powered imagery mean for creative industries and workers – from graphic designers and video creators to PR firms, advertising agencies and marketing teams? Should we imagine the wholesale disappearance of, say, the illustrator? Since then, the debate around the legal ramifications of art and AI has only gotten louder.

In summer 2022, the MLops market was still hot among investors. But for enterprise end users, I addressed the fact that it also seemed like a hot mess.
The MLops ecosystem is highly fragmented, with hundreds of vendors competing in a global market that was estimated at $612 million in 2021 and is projected to reach over $6 billion by 2028. But according to Chirag Dekate, a VP and analyst at Gartner Research, that crowded landscape is leading to confusion among enterprises about how to get started and which MLops vendors to use. “We are seeing end users getting more mature in the kind of operational AI ecosystems they’re building – leveraging Dataops and MLops,” said Dekate. That is, enterprises take their data source requirements and their cloud or infrastructure center of gravity (whether on-premises, in the cloud or hybrid), and then integrate the right set of tools. But it can be hard to pin down the right toolset.

In August, I enjoyed getting a look at a possible AI hardware future — one where analog AI hardware, rather than digital, taps fast, low-energy processing to solve machine learning’s rising costs and carbon footprint. That’s what Logan Wright and Tatsuhiro Onodera, research scientists at NTT Research and Cornell University, envision: a future where machine learning (ML) will be performed with novel physical hardware, such as devices based on photonics or nanomechanics. These unconventional devices, they say, could be applied in both edge and server settings. Deep neural networks, which are at the heart of today’s AI efforts, hinge on the heavy use of digital processors like GPUs. But for years, there have been concerns about the monetary and environmental cost of machine learning, which increasingly limits the scalability of deep learning models.

The New York Times reached out to me in late August to talk about one of the company’s biggest challenges: striking a balance between meeting its latest target of 15 million digital subscribers by 2027 and getting more people to read articles online.
These days, the multimedia giant is digging into that complex cause-and-effect relationship using a causal machine learning model called the Dynamic Meter, which is all about making its paywall smarter. According to Chris Wiggins, chief data scientist at the New York Times, for the past three or four years the company has worked to understand its user journey and the workings of the paywall. Back in 2011, when the Times began focusing on digital subscriptions, “metered” access was designed so that non-subscribers could read the same fixed number of articles every month before hitting a paywall requiring a subscription. That allowed the company to gain subscribers while also allowing readers to explore a range of offerings before committing to a subscription.

I enjoy covering anniversaries — and exploring what has changed and evolved over time. So when I realized that autumn 2022 was the 10-year anniversary of groundbreaking 2012 research on the ImageNet database, I immediately reached out to key AI pioneers and experts for their thoughts looking back on the deep learning ‘revolution,’ as well as what this research means today for the future of AI. Artificial intelligence (AI) pioneer Geoffrey Hinton, one of the trailblazers of the deep learning “revolution” that began a decade ago, says that the rapid progress in AI will continue to accelerate. Other AI pathbreakers, including Yann LeCun, head of AI and chief scientist at Meta, and Stanford University professor Fei-Fei Li, agree with Hinton that the results from the groundbreaking 2012 research on the ImageNet database — which built on previous work to unlock significant advancements in computer vision specifically and deep learning overall — pushed deep learning into the mainstream and sparked massive momentum that will be hard to stop.
But Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, wrote this past March about deep learning “hitting a wall,” and says that while there has certainly been progress, “we are fairly stuck on common sense knowledge and reasoning about the physical world.” And Emily Bender, professor of computational linguistics at the University of Washington and a regular critic of what she calls the “deep learning bubble,” said she doesn’t think that today’s natural language processing (NLP) and computer vision models add up to “substantial steps” toward “what other people mean by AI and AGI.”

In October, research lab DeepMind made headlines when it unveiled AlphaTensor, the “first artificial intelligence system for discovering novel, efficient and provably correct algorithms.” The Google-owned lab said the research “sheds light” on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices. Ever since the Strassen algorithm was published in 1969, computer science has been on a quest to surpass its speed of multiplying two matrices. While matrix multiplication is one of algebra’s simplest operations, taught in high school math, it is also one of the most fundamental computational tasks and, as it turns out, one of the core mathematical operations in today’s neural networks. This research delves into how AI could be used to improve computer science itself, said Pushmeet Kohli, head of AI for science at DeepMind, at a press briefing. “If we’re able to use AI to find new algorithms for fundamental computational tasks, this has enormous potential because we might be able to go beyond the algorithms that are currently used, which could lead to improved efficiency,” he said.

All year I was curious about the use of authorized deepfakes in the enterprise — that is, not the well-publicized negative side of synthetic media, in which a person in an existing image or video is replaced with someone else’s likeness.
But there is another side to the deepfake debate, say several vendors that specialize in synthetic media technology. What about authorized deepfakes used for business video production? Most use cases for deepfake videos, they claim, are fully authorized. They may be in enterprise business settings — for employee training, education and ecommerce, for example. Or they may be created by users such as celebrities and company leaders who want to take advantage of synthetic media to “outsource” to a virtual twin.

Those working in AI and machine learning may well have thought they would be protected from a wave of big tech layoffs. Even after Meta layoffs in early November 2022, which cut 11,000 employees, CEO Mark Zuckerberg publicly shared a message to Meta employees that signaled, to some, that those working in artificial intelligence (AI) and machine learning (ML) might be spared the brunt of the cuts. However, a Meta research scientist who was laid off tweeted that he and the entire research organization called “Probability,” which focused on applying machine learning across the infrastructure stack, were cut. The team had 50 members, not including managers, the research scientist, Thomas Ahle, said, tweeting: “19 people doing Bayesian Modeling, 9 people doing Ranking and Recommendations, 5 people doing ML Efficiency, 17 people doing AI for Chip Design and Compilers. Plus managers and such.”

On November 30, as GPT-4 rumors flew around NeurIPS 2022 in New Orleans (including whispers that details about GPT-4 would be revealed there), OpenAI managed to make plenty of news. The company announced a new model in the GPT-3 family of AI-powered large language models, text-davinci-003, part of what it calls the “GPT-3.5 series,” that reportedly improves on its predecessors by handling more complex instructions and producing higher-quality, longer-form content.
Since then, the hype around ChatGPT has grown exponentially — but so has the debate around the hidden dangers of these tools, which even CEO Sam Altman has weighed in on.