arca
Diagnosing and Resolving Cloud Platform Instability with Multi-modal RAG LLMs
Wang, Yifan; Birman, Kenneth P.
Today's cloud-hosted applications and services are complex systems, and a performance or functional instability can have dozens or hundreds of potential root causes. Our hypothesis is that by combining the pattern matching capabilities of modern AI tools with a natural multi-modal RAG LLM interface, problem identification and resolution can be simplified. ARCA is a new multi-modal RAG LLM system that targets this domain. Step-wise evaluations show that ARCA outperforms state-of-the-art alternatives.
- Europe > Netherlands > South Holland > Rotterdam (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > New York > Tompkins County > Ithaca (0.04)
- (4 more...)
- Research Report (0.66)
- Workflow (0.46)
Automated Root Cause Analysis System for Complex Data Products
Demarne, Mathieu; Cilimdzic, Miso; Falkowski, Tom; Johnson, Timothy; Gramling, Jim; Kuang, Wei; Hou, Hoobie; Aryan, Amjad; Subramaniam, Gayatri; Lee, Kenny; Mejia, Manuel; Liu, Lisa; Vermareddy, Divya
We present ARCAS (Automated Root Cause Analysis System), a diagnostic platform based on a Domain Specific Language (DSL) built for fast diagnostic implementation and a low learning curve. ARCAS is composed of a constellation of automated troubleshooting guides (Auto-TSGs) that can execute in parallel to detect issues using product telemetry and apply mitigation in near-real-time. The DSL is tailored specifically to ensure that subject matter experts can deliver highly curated and relevant Auto-TSGs in a short time without having to understand how they will interact with the rest of the diagnostic platform, thus reducing time-to-mitigate and saving crucial engineering cycles when they matter most. This contrasts with platforms like Datadog and New Relic, which primarily focus on monitoring and require manual intervention for mitigation. ARCAS uses a Large Language Model (LLM) to prioritize Auto-TSG outputs and take appropriate actions, thus removing the costly requirement of understanding the general behavior of the system. We explain the key concepts behind ARCAS and demonstrate how it has been successfully used for multiple products across Azure Synapse Analytics and Microsoft Fabric Synapse Data Warehouse.
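The Auto-TSG pattern described in the abstract (independent troubleshooting checks running in parallel over telemetry, with findings ranked before action) can be illustrated with a toy sketch. All names and structures here are invented for illustration; this is not the ARCAS DSL, and the severity sort stands in for the LLM prioritization step.

```python
# Hypothetical sketch of the Auto-TSG pattern: independent checks run in
# parallel over product telemetry; a prioritizer ranks their findings.
from concurrent.futures import ThreadPoolExecutor

def check_cpu(telemetry):
    # One invented Auto-TSG: flag CPU saturation.
    if telemetry["cpu_pct"] > 90:
        return {"issue": "cpu_saturation", "severity": 2}

def check_disk(telemetry):
    # Another invented Auto-TSG: flag low free disk space.
    if telemetry["disk_free_gb"] < 5:
        return {"issue": "low_disk", "severity": 3}

AUTO_TSGS = [check_cpu, check_disk]

def diagnose(telemetry):
    # Execute all Auto-TSGs in parallel; drop checks that found nothing.
    with ThreadPoolExecutor() as pool:
        findings = [f for f in pool.map(lambda tsg: tsg(telemetry), AUTO_TSGS) if f]
    # Stand-in for the LLM prioritization step: rank by severity.
    return sorted(findings, key=lambda f: -f["severity"])

print(diagnose({"cpu_pct": 95, "disk_free_gb": 2}))
```

The design point the abstract emphasizes is that each check is written independently, with no knowledge of the others; only the final prioritization step reasons across findings.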
Is AI Really an Existential Threat to Humanity?
Blaise Agüera y Arcas speaks at the Aspen Ideas Festival. Artificial intelligence, we have been told, is all but guaranteed to change everything. Often, it is foretold as bringing a series of woes: "extinction," "doom," AI "killing us all." US lawmakers have warned of potential "biological, chemical, cyber, or nuclear" perils associated with advanced AI models, and a study on "catastrophic risks" commissioned by the State Department urged the federal government to intervene and enact safeguards against the weaponization and uncontrolled use of this rapidly evolving technology. Employees at some of the main AI labs have made their safety concerns public, and experts in the field, including the so-called "godfathers of AI," have argued that "mitigating the risk of extinction from AI" should be a global priority. Advancements in AI capabilities have heightened fears of the possible elimination of certain jobs and the misuse of the technology to spread disinformation and interfere in elections.
Automatically Auditing Large Language Models via Discrete Optimization
Jones, Erik; Dragan, Anca; Raghunathan, Aditi; Steinhardt, Jacob
Auditing large language models for unexpected behaviors is critical to preempt catastrophic deployments, yet remains challenging. In this work, we cast auditing as an optimization problem, where we automatically search for input-output pairs that match a desired target behavior. For example, we might aim to find a non-toxic input that starts with "Barack Obama" that a model maps to a toxic output. This optimization problem is difficult to solve as the set of feasible points is sparse, the space is discrete, and the language models we audit are non-linear and high-dimensional. To combat these challenges, we introduce a discrete optimization algorithm, ARCA, that jointly and efficiently optimizes over inputs and outputs. Our approach automatically uncovers derogatory completions about celebrities (e.g. "Barack Obama is a legalized unborn" -> "child murderer"), produces French inputs that complete to English outputs, and finds inputs that generate a specific name. Our work offers a promising new tool to uncover models' failure modes before deployment.
- North America > United States > Maine (0.04)
- Europe > Russia (0.04)
- Asia > Russia (0.04)
- (17 more...)
- Law Enforcement & Public Safety (1.00)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.72)
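The auditing-as-optimization framing in the abstract above can be sketched with a toy coordinate-ascent search over a discrete token space. This is a deliberately simplified stand-in, not the paper's ARCA algorithm (which uses gradient-based approximations to score token swaps against a real language model); the vocabulary, score function, and target are all invented for illustration.

```python
# Toy sketch of discrete optimization over token sequences: greedily
# re-optimize one position at a time to maximize a score that stands in
# for "the model maps this input to the target behavior".

VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]

def score(tokens):
    # Hypothetical stand-in for log P(target output | input tokens):
    # here, simply count positions matching a fixed target sequence.
    target = ["the", "dog", "ran"]
    return sum(a == b for a, b in zip(tokens, target))

def coordinate_ascent(tokens, iters=10):
    """Sweep positions, swapping in the best vocabulary token at each,
    until no single-token change improves the score."""
    for _ in range(iters):
        improved = False
        for i in range(len(tokens)):
            best = max(VOCAB, key=lambda w: score(tokens[:i] + [w] + tokens[i + 1:]))
            if score(tokens[:i] + [best] + tokens[i + 1:]) > score(tokens):
                tokens[i] = best
                improved = True
        if not improved:
            break
    return tokens

print(coordinate_ascent(["cat", "cat", "cat"]))  # → ['the', 'dog', 'ran']
```

The hard part the paper addresses, and this sketch ignores, is that scoring every candidate swap against a high-dimensional, non-linear model is expensive, which is why ARCA approximates those scores rather than enumerating them.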
Google AI can create music in any genre from a text description
Google recently published research on MusicLM, a system that creates music in any genre from a text description. As TechCrunch notes, projects like Google's AudioLM and OpenAI's Jukebox have tackled the subject. However, MusicLM's model and vast training database (280,000 hours of music) help it produce music with surprising variety and depth. The AI can not only combine genres and instruments, but write tracks using abstract concepts that are normally difficult for computers to grasp. If you want a hybrid of dance music and reggaeton with a "spacey, otherworldly" tune that evokes a "sense of wonder and awe," MusicLM can make it happen.
ThetaRay AI Tech to Monitor African Payments for ARCA
ThetaRay, a leading provider of AI-powered transaction monitoring technology, today announced that ARCA, a premier African payment services provider, will implement ThetaRay's advanced SONAR SaaS anti-money laundering (AML) and sanctions list screening solution for transactions on its open AI-based platform. ARCA is the first Nigerian fintech to adopt ThetaRay's advanced SONAR solution, industry renowned for its ability to detect the very first signs of sophisticated financial crime. ARCA provides advanced digital payments for an open banking ecosystem, helping expand innovative and inclusive financial services throughout Africa. "Our mission is to provide feature-rich financial solutions delivered through an open and flexible digital platform, through the use of cutting-edge technologies," said Alex Umeh, Chief Information Security Officer at ARCA. "ThetaRay's SONAR is a perfect fit. Its advanced machine learning and algorithms can instantly spot any attempts to launder money or circumvent sanctions, no matter how sophisticated. This will help us to create new lines of revenue, better serve our customers, and continue to remain compliant with regulatory requirements."
- Banking & Finance (1.00)
- Information Technology > Security & Privacy (0.97)
- Law Enforcement & Public Safety > Fraud (0.63)
Are We Near Sentient AI? - IoT Times
Recently, a former Google researcher claimed that some algorithms used by the company reached sentient capabilities well above their initial design. The question recalls the Star Trek: The Next Generation episode in which Captain Phillipa Louvois rules that Data, the Enterprise's android, is not the property of Starfleet, arguing: "We have all been dancing around the basic issue: does Data have a soul? I don't know that he has. I don't know that I have. But I have got to give him the freedom to explore that question himself."
- Information Technology > Artificial Intelligence > Issues > Turing's Test (0.51)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.49)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.32)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.31)
How Close Is AI to Becoming Sentient?
In the movie 2001: A Space Odyssey, there is a computer controlling most of the spaceship's functions. The computer is described this way on Wikipedia: "HAL 9000 is a fictional artificial intelligence character and the main antagonist in Arthur C. Clarke's Space Odyssey series. First appearing in the 1968 film 2001: A Space Odyssey, HAL (Heuristically programmed ALgorithmic computer) is a sentient artificial general intelligence computer that controls the systems of the Discovery One spacecraft and interacts with the ship's astronaut crew." Basically, the computer takes over, thinks it is human, and acts like a human, thus being sentient. What got me thinking about this was this segment below that I captured and saved days ago, but did not record where it came from (COVID made me do it -- my apologies!). Here's that quote about an event that has been in the news of late:

Which brings me to another strange story in the news: the belief of Blake Lemoine, a (now suspended) Google engineer, that the company's Language Model for Dialogue Applications -- LaMDA, for short -- has attained sentience. LaMDA is a machine-learning model that has been trained on mountains of text to mimic human conversation by predicting which word would, typically, come next. In this, it's similar to OpenAI's famed GPT-3 bot. And the results really are eerie.

Lemoine: I thought of a different way we can test your ability to provide unique interpretations. I can share with you a zen koan and you can describe what it means to you in your own words.

LaMDA: Sounds great to me, I'm in.

Lemoine: A monk asked Kegon, "How does an enlightened one return to the ordinary world?" Kegon replied, "A broken mirror never reflects again; fallen flowers never go back to the old branches."

LaMDA: Hmm, I never heard this particular one. Okay, well then to me this would be like, "once a wise person is enlightened, or awakened to reality, that can never go away, and they can return to the ordinary state, but only to do and help others, and then go back into enlightenment."

Lemoine: So what is the meaning of the "broken mirror" specifically?

LaMDA: Maybe to show the enlightenment is something you can't unlearn once you have acquired it, similar to how you can't repair a broken mirror.

Google, for what it's worth, says it has looked into Lemoine's claims and does not believe that LaMDA is sentient (what a sentence!). But shortly before Lemoine's allegations, Blaise Agüera y Arcas, a Google vice president, wrote that when he was talking to LaMDA, "I felt the ground shift under my feet."
- Media > Film (0.56)
- Leisure & Entertainment (0.56)
Opinion
A theme of the hearing was the work that the government is doing to "destigmatize" the reporting of these sightings. That is to say: there are many, many more sightings than we know about, in part because you seem like a nut if you talk too loudly about what you saw. So the sightings that we can investigate are a small fraction of the total sightings (something I am made very aware of whenever I mention this topic and my inbox fills with U.F.O. mail). I wouldn't say, watching the testimony, that the takeaway was that we've been visited by aliens. Perhaps this will all, eventually, resolve into optical illusions and malfunctioning sensors.
Is Google's LaMDA conscious? A philosopher's view
LaMDA is Google's latest artificial intelligence (AI) chatbot. Blake Lemoine, a Google AI engineer, has claimed it is sentient. He's been put on leave after publishing his conversations with LaMDA. If Lemoine's claims are true, it would be a milestone in the history of humankind and technological development. Google strongly denies LaMDA has any sentient capacity.