UK has real concerns about AI risks, says competition regulator

The Guardian

Just six major technology companies are at the heart of the AI sector through an "interconnected web" of more than 90 investment and partnership links, the UK's competition regulator has warned, sparking increased concern about the anti-competitive nature of the technology. Sarah Cardell, the chief executive of the Competition and Markets Authority, said AI foundation models – general-purpose AI systems such as OpenAI's GPT-4 and Google's Gemini, on which consumer and business products are frequently built – were a potential "paradigm shift" for society. Speaking in Washington, she added that the immense concentration of power they represented would give a small number of companies "the ability and incentives to shape these markets in their own interests". "When we started this work, we were curious. Now, with a deeper understanding and having watched developments very closely, we have real concerns," Cardell said.


Getty bans AI-generated art due to copyright concerns

#artificialintelligence

Text-to-image tools, such as DALL-E, Midjourney, Craiyon, and Stable Diffusion, have opened the floodgates for machine-made artwork. Anyone can either pay a small fee or use a free model to create images from text descriptions. All you have to do is tell the AI system, in writing, what kind of scene you want it to make, and the software will generate it for you. The quality of these images has become so good that professionals are now using them to make magazine front covers and adverts, win art competitions, and so on. You can see them as interesting tools for generating pictures, or as the end of art as we know it.


Lost In The Covid-19 Shuffle: 5 Key Areas That Need AI Help

#artificialintelligence

When it comes to Covid-19, one big question is how we are using artificial intelligence (AI) to help. While there is much focus on contact tracing and virus research, there are other big problems resulting from the coronavirus. Going beyond these, there are five key areas for people to focus AI on.


Google CEO Sundar Pichai: This is why AI must be regulated

ZDNet

Google CEO Sundar Pichai has explained why the world's governments need to impose regulations on the use of artificial intelligence (AI), beyond the principles published by any one company. Pichai outlined his thoughts on AI regulation in the Financial Times today, reflecting on Google's own AI principles, which it published in mid-2018 following an outcry from employees over its work on the Pentagon's Project Maven. The project applied Google-developed object recognition AI to drone surveillance technology. Google vowed in its AI principles not to create AI that would harm people, but Pichai noted that "principles that remain on paper are meaningless" without action, pointing to the tools Google has developed and open-sourced to test AI for "fairness". He also admitted that with every major innovation come potential negative side effects.


AI Bias a Real Concern in Business

#artificialintelligence

This number jibes with another finding from the DataRobot survey: 38% of the organizations surveyed reported that they use "black box" machine learning systems that offer no insight into how they make decisions. The juxtaposition of AI bias concerns and black box systems is enough to warrant serious questions about the direction companies should take with their machine learning, according to John Giannandrea, Apple's senior vice president of machine learning and AI strategy. "If someone is trying to sell you a black box system… and you don't know how it works or what data was used to train it, then I wouldn't trust it," DataRobot quotes Giannandrea as saying in its report. The survey indicates that organizations are aware of the potential pitfalls and are actively working to mitigate them. DataRobot found that 64% of survey respondents say they're "very to extremely" confident in their ability to identify AI bias.


Should We Fear Artificial Superintelligence?

#artificialintelligence

Speaking at a conference in Lisbon, Portugal shortly before his death, Stephen Hawking told attendees that the development of artificial intelligence might become the "worst event in the history of our civilization," and he had every reason for concern. Known as artificial superintelligence (ASI) among AI researchers, ethicists, and others, an AI that surpasses human intelligence has the potential to become more powerful than anything this planet has ever seen, and it poses what will likely be the final existential challenge humanity will ever face as a species. To better understand what concerned Stephen Hawking, Elon Musk, and many others, we need to deconstruct the popular culture depictions of AI. The reality is that AI has been with us for a while now, ever since computers were able to make decisions based on inputs and conditions. When we see a threatening AI system in the movies, it's the malevolence of the system, coupled with the power of a computer, that scares us.


Artificial Intelligence, Real Concerns: Hype, Hope and the Hard Truth About AI

#artificialintelligence

Artificial intelligence (AI) is generating both interest and investment from companies hoping to leverage the power of autonomous, self-learning solutions. The Pentagon recently earmarked $2 billion in funding to help the Defense Advanced Research Projects Agency (DARPA) push AI forward, and artificially intelligent solutions are dominating industry subsets such as medical imaging, where AI companies raised a combined $130 million worth of investments from March 2017 through June 2018. Information security deployments are also on the rise as IT teams leverage AI to defeat evolving attack methods, and recent data suggests that AI implementation could both boost gross domestic product (GDP) and generate new jobs. It's easy to see AI as a quick fix for everything from stagnating revenues to medical advancement to network protection. According to a recent survey from ESET, however, rising business expectations and misleading marketing terminology have generated significant hype around AI, to the point where 75 percent of IT decision-makers now see AI as the silver bullet for their security issues.


Artificial intelligence taking away jobs a real concern: Shashi Tharoor

#artificialintelligence

Artificial intelligence is taking away jobs in the fields of healthcare and information technology, and this is a cause for concern, Congress MP Shashi Tharoor said. "Artificial intelligence is also making inroads into jobs like medical transcription. A World Bank report points out that 69% of Indian jobs could be taken away by robots," he said, speaking on the third day of the Jain International Trade Organisation (JITO) conclave in Chennai. Mr. Tharoor spoke on the theme of youth empowerment and also inaugurated JITO's first Youth Conclave. "In the U.S., they are talking about driverless cars. What will happen to 25 million drivers in India?" he asked.


Is AI evil? No, and that question distracts us from the real concerns, says AI2's Oren Etzioni

#artificialintelligence

At times, modern artificial intelligence still feels like science fiction. A few decades ago, the AI programs of today would have seemed almost outrageous: self-driving cars, systems that have mastered the most challenging game in the world, and even programs that can alert doctors to medical errors before they happen. Despite the incredible progress and potential, public opinion of AI remains rooted in science fiction: evil entities, out to destroy mankind. The field gets a bad rap in the press, in Hollywood, and even from tech and science leaders like Stephen Hawking and Elon Musk. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence (AI2) and a longtime AI researcher, says this depiction of an "evil AI" is far off the mark from the reality of today's tech.


First White House AI workshop focuses on how machines (plus humans) will change government

#artificialintelligence

Intelligent machines won't be ruling the world anytime soon – but what happens when they turn you down for a loan, crash your car or discriminate against you because of your race or gender? On one level, the answer is simple: "It depends," says Bryant Walker Smith, a law professor at the University of South Carolina who specializes in the issues raised by autonomous vehicles. But that opens the door to a far more complex legal debate. "It seems to me that 'My Robot Did It' is not an excuse," says Oren Etzioni, CEO of the Seattle-based Allen Institute for Artificial Intelligence, or AI2. The rapidly rising challenges facing America's legal system and policymakers were the focus of today's first-ever White House public workshop on artificial intelligence, presented at the University of Washington School of Law. For a full afternoon, Smith, Etzioni and other experts debated the options in an auditorium that was filled to capacity.