collaborative ai
Latam-GPT: The Free, Open Source, and Collaborative AI of Latin America
Latam-GPT is a new large language model being developed in and for Latin America. The project, led by the nonprofit Chilean National Center for Artificial Intelligence (CENIA), aims to help the region achieve technological independence by developing an open source AI model trained on Latin American languages and contexts. "This work cannot be undertaken by just one group or one country in Latin America: It is a challenge that requires everyone's participation," says Álvaro Soto, director of CENIA, in an interview with WIRED en Español. "Latam-GPT is a project that seeks to create an open, free, and, above all, collaborative AI model. We've been working for two years with a very bottom-up process, bringing together citizens from different countries who want to collaborate. Recently, it has also seen some more top-down initiatives, with governments taking an interest and beginning to participate in the project."
- North America > Central America (0.90)
- South America > Colombia (0.08)
- South America > Brazil (0.08)
- (3 more...)
On the Utility of Accounting for Human Beliefs about AI Behavior in Human-AI Collaboration
Yu, Guanghui, Kasumba, Robert, Ho, Chien-Ju, Yeoh, William
To enable effective human-AI collaboration, merely optimizing AI performance while ignoring humans is not sufficient. Recent research has demonstrated that designing AI agents to account for human behavior leads to improved performance in human-AI collaboration. However, a limitation of most existing approaches is their assumption that human behavior is static, irrespective of AI behavior. In reality, humans may adjust their action plans based on their observations of AI behavior. In this paper, we address this limitation by enabling a collaborative AI agent to consider the beliefs of its human partner, i.e., what the human partner thinks the AI agent is doing, and design its action plan to facilitate easier collaboration with its human partner. Specifically, we developed a model of human beliefs that accounts for how humans reason about the behavior of their AI partners. Based on this belief model, we then developed an AI agent that considers both human behavior and human beliefs in devising its strategy for working with humans. Through extensive real-world human-subject experiments, we demonstrated that our belief model more accurately predicts humans' beliefs about AI behavior. Moreover, we showed that our design of AI agents that accounts for human beliefs enhances performance in human-AI collaboration.
- Health & Medicine (1.00)
- Leisure & Entertainment > Games (0.46)
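The abstract above centers on modeling what the human partner believes the AI is doing. A minimal sketch of one such belief model, assuming a simple Bayesian update over a hypothetical set of AI plans (the plan names and likelihoods below are invented for illustration, not taken from the paper):

```python
# Hypothetical sketch: the human holds a probability distribution over
# which plan the AI is following, and updates it after observing each
# AI action. Plans, actions, and likelihoods are illustrative only.

PLANS = ["fetch_left", "fetch_right"]

# P(action | plan): how likely each observable action is under each plan.
LIKELIHOOD = {
    "fetch_left":  {"move_left": 0.8, "move_right": 0.2},
    "fetch_right": {"move_left": 0.2, "move_right": 0.8},
}

def update_belief(belief, action):
    """Bayesian update of the human's belief over the AI's plans."""
    posterior = [belief[i] * LIKELIHOOD[p][action] for i, p in enumerate(PLANS)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Start from a uniform belief; observe the AI move left twice.
belief = [0.5, 0.5]
for obs in ["move_left", "move_left"]:
    belief = update_belief(belief, obs)

print({p: round(b, 3) for p, b in zip(PLANS, belief)})
# → {'fetch_left': 0.941, 'fetch_right': 0.059}
```

An agent that plans with such a model can prefer actions that make its intended plan legible, e.g. moving left early if it wants the human to infer `fetch_left` quickly.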
How Mastercard is using AI to address cyber risk
As with just about every industry, AI has increasingly infiltrated the financial sector -- from visual AI tools that monitor customers and workers to the automation of the Paycheck Protection Program (PPP) application process. Talking at VentureBeat's Transform 2021 event today, Johan Gerber, executive VP for security and cyber innovation at Mastercard, discussed how Mastercard is using AI to better understand and adapt to cyber risk, while keeping people's data safe. On the one hand, consumers have never had it so easy -- making payments is as frictionless as it has ever been. Ride-hail passengers can exit their cab without wasting precious minutes finalizing the transaction with the driver, while home workers can configure their printer to automatically reorder ink when it runs empty. "As easy as it is for the consumer, the complexity lies in the background -- we have seen the evolution of this hyper connected world in the backend just explode," Gerber said.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.30)
10 Rules for Collaborative Artificial Intelligence
While the exact number that analysts and AI experts assign to it may vary, the "rule of thumb" statistic is that only 1 out of every 10 Artificial Intelligence initiatives ever makes it into production. Having worked in this field for years, alongside countless clients trying to make AI a reality, it's not a statistic that is difficult to accept. The specific reasons for this difficulty are vast and nuanced, but, in my opinion, can be captured in three big buckets: translating the academic nature of data science to business, having the right data, and collaboration (or the lack thereof) on building a solution that provides value. Expanding on all of the above is another piece altogether. Instead, I'd like to share some insight into what's possible, specifically related to the third bucket above: building collaborative AI, as a collaborative framework, really is the key to mitigating the risks of bias, distrust, and concept drift in AI deployments.
What is Collaborative AI?
How Collaborative AI, also known as a Hybrid Workforce, will impact the modern workforce. Collaborative AI is a new model of work that enables employees to perform their job functions faster and with more insight as a result of teamwork with AI systems. Also called a Digital Workforce, Collaborative AI frees humans from mundane, repetitive tasks so they can focus on higher-value or unique work. "In our research involving 1,500 companies, we found that firms achieve the most significant performance improvements when humans and machines work together," according to a Harvard Business Review article on Collaborative AI. "Through such collaborative intelligence, humans and AI actively enhance each other's complementary strengths: the leadership, teamwork, creativity, and social skills of the former, and the speed, scalability, and quantitative capabilities of the latter. What comes naturally to people (making a joke, for example) can be tricky for machines, and what's straightforward for machines (analyzing gigabytes of data) remains virtually impossible for humans. Business requires both kinds of capabilities."
Towards Risk Modeling for Collaborative AI
Camilli, Matteo, Felderer, Michael, Giusti, Andrea, Matt, Dominik T., Perini, Anna, Russo, Barbara, Susi, Angelo
Collaborative AI systems aim at working together with humans in a shared space to achieve a common goal. This setting imposes potentially hazardous circumstances due to contacts that could harm human beings. Thus, building such systems with strong assurances of compliance with requirements, domain-specific standards, and regulations is of the greatest importance. Challenges associated with achieving this goal become even more severe when such systems rely on machine learning components rather than top-down rule-based AI. In this paper, we introduce a risk modeling approach tailored to Collaborative AI systems. The risk model includes goals, risk events, and domain-specific indicators that potentially expose humans to hazards. The risk model is then leveraged to drive assurance methods, which in turn feed the risk model with insights extracted from run-time evidence. Our envisioned approach is described by means of a running example in the domain of Industry 4.0, where a robotic arm endowed with a visual perception component, implemented with machine learning, collaborates with a human operator on a production-relevant task.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- (2 more...)
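The risk model described above ties goals to risk events via domain-specific indicators that are updated from run-time evidence. A minimal sketch of that structure, assuming invented names and thresholds (not taken from the paper):

```python
from dataclasses import dataclass, field

# Illustrative risk-model skeleton: indicators carry run-time values,
# and a risk event fires when any of its indicators signals exposure.
# All names and thresholds below are hypothetical.

@dataclass
class Indicator:
    name: str            # e.g. arm speed while a human is nearby
    threshold: float     # value beyond which a hazard is indicated
    value: float = 0.0   # updated from run-time evidence

    def exposed(self) -> bool:
        return self.value > self.threshold

@dataclass
class RiskEvent:
    name: str
    indicators: list = field(default_factory=list)

    def triggered(self) -> bool:
        # Conservative policy: any exposed indicator triggers the event.
        return any(ind.exposed() for ind in self.indicators)

# Industry 4.0 running example: a robot arm sharing space with an operator.
speed = Indicator("arm_speed_near_human_m_s", threshold=0.5)
contact = RiskEvent("possible_contact_with_operator", [speed])

speed.value = 0.7  # run-time evidence fed back into the risk model
print(contact.triggered())  # → True
```

In the paper's envisioned loop, assurance methods would read events like `contact` to decide on mitigations, and their run-time observations would flow back into the indicator values.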
Look out: Here comes the next wave of smarter, nimbler and more collaborative AI - Digirupt IO
When many think about the progress of AI and its impact on work, they envision a world where the robots and software thinking machines do all of the work and there's little room left for the work humans used to do. It's certainly not the future for the AI and human workforce that the Defense Advanced Research Projects Agency (DARPA) sees. DARPA is the agency that helped usher in the Internet, the original expert systems of the 1960s through 1980s, and the big data analysis and machine learning systems that laid the foundation for natural language processing, self-driving cars, and personal assistant bots. Now DARPA is leading the efforts to make AI and humans even more collaborative co-workers. AI has proven some of its value in the form of very targeted and specialized systems.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Disney experiments look to make kid-robot interactions more natural
Sooner or later, our children will be raised by robots, so it's natural that Disney, purveyor of both robots and child-related goods, would want to get ahead of that trend. Disney ran three studies together, with each part documented in a separate paper posted today. The kids in the study (about 80 of them) proceeded through a series of short activities generally associated with storytelling and spoken interaction, their progress carefully recorded by the experimenters. First they were introduced (individually as they took part in the experiment, naturally) to a robot named Piper, which was controlled remotely ("wizarded") by a puppeteer in another room, but had a set of recorded responses it drew from for different experimental conditions. The idea is that the robot should use what it knows to inform what it says and how it says it, but it's not clear quite how that should work, especially with kids.
Microsoft competition asks PhD students to create advanced AI to play Minecraft - TechRepublic
AI has achieved milestones in mastering games like chess, Go, and, recently, poker--illustrating how successful machines have become at completing specific, narrow tasks. But can AI move beyond the narrow, toward achieving more general, human-like skills? On Thursday, Microsoft launched a competition to address this question. Microsoft's Project Malmo, which the company calls a "sophisticated AI experimentation platform," brings researchers together to use Minecraft as a testing tool for developing AI--smart, collaborative AI that can compete in a virtual world. The Malmo Collaborative AI Challenge asks PhD students to enter this world and create AI that can team up with randomly assigned players to compete for a high score in Minecraft.
- Europe > Sweden > Skåne County > Malmö (0.49)
- Oceania > Australia > New South Wales (0.06)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.06)