
Community colleges can become America's AI incubators

#artificialintelligence

Millions of students attend community colleges every year, with almost 1,300 schools located in every corner of the United States. With their large student bodies, community colleges are a massive source of potential for expanding the artificial intelligence (AI) workforce, but employers and policymakers alike sorely underestimate their potential. If the United States aims to maintain its global lead and competitive advantage in AI, it must recognize that community colleges hold a special spot in our education system and are too important to be overlooked any longer. As detailed in a recent study I co-authored as part of Georgetown University's Center for Security and Emerging Technology (CSET), community colleges have the potential to support the country in its mission for superiority in AI. Community colleges could create pathways to good-paying jobs across the United States and become tools for training a new generation of AI-literate workers.


Precision, Accuracy, Scale – And Experience – All Matter With AI

#artificialintelligence

When it comes to building any platform, the hardware is the easiest part and, for many of us, the fun part. But more than anything else, particularly at the beginning of any data processing revolution, it is experience that matters most, whether you gain it or buy it. With AI being such a hot commodity, many companies that want to figure out how to weave machine learning into their applications are going to have to buy their experience first and cultivate expertise later. This realization is what caused Christopher Ré, an associate professor of computer science at Stanford University and a member of its Stanford AI Lab, Kunle Olukotun, a professor of electrical engineering at Stanford, and Rodrigo Liang, a chip designer who worked at Hewlett-Packard, Sun Microsystems, and Oracle, to co-found SambaNova Systems, one of a handful of AI startups trying to sell complete platforms to customers looking to add AI to their application mix. The company has raised an enormous $1.1 billion in four rounds of venture funding since its founding in 2017, and counts Google Ventures, Intel Capital, BlackRock, Walden International, SoftBank, and others as backers as it attempts to commercialize its DataScale platform and, more importantly, its Dataflow subscription service, which rolls it all up and puts a monthly fee on the stack and the expertise to help use it. SambaNova's customers have been pretty quiet, but Lawrence Livermore National Laboratory and Argonne National Laboratory have installed DataScale platforms and are figuring out how to integrate its AI capabilities into their simulation and modeling applications. Timothy Prickett Morgan: I know we have talked many times before during the rise of the "Niagara" T series of many-threaded Sparc processors, and I had to remind myself of that because I am a dataflow engine, not a storage device, after writing so many stories over more than three decades. I thought it was time to have a chat about what SambaNova is seeing out there in the market, but I didn't immediately make the connection that it was you.


Study Says AI Improves Sensitivity of Fracture Detection by 20 Percent

#artificialintelligence

Researchers have noted that traumatic fractures are among the most commonly missed diagnoses.1,2 However, a new study suggests that artificial intelligence (AI) may offer significant benefit in improving the assessment of fractures.3 In the study of 500 patients (268 men and 232 women), researchers compared unassisted assessment of acute fractures, assessment with the assistance of an FDA-cleared algorithm (Boneview, Gleamer), and stand-alone use of AI. The authors found that AI-assisted assessment had a 20-percentage-point higher sensitivity for diagnosing fractures on radiographs (86 percent) in comparison to unassisted assessment (66 percent). The use of AI assistance also led to fewer false negatives (26) than unassisted radiograph assessment (64), according to the study.
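For readers unfamiliar with the metric, sensitivity here is the proportion of true fractures that get detected: true positives divided by the sum of true positives and false negatives. The snippet below is a minimal sketch of that calculation; the counts are illustrative values chosen to be consistent with the reported 86 percent figure, not raw numbers from the study.

    # Sensitivity (recall) = TP / (TP + FN).
    # The counts below are illustrative only; they are not the study's raw
    # confusion-matrix values.
    def sensitivity(true_positives: int, false_negatives: int) -> float:
        return true_positives / (true_positives + false_negatives)

    # With 26 false negatives, roughly 160 true positives would yield the
    # reported 86 percent sensitivity for AI-assisted reading.
    print(sensitivity(true_positives=160, false_negatives=26))  # ~0.86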


25 AI Insurance Companies You Should Know

#artificialintelligence

The insurance industry has always dealt in data, but it hasn't always been able to put that data to optimal use. With the rise of artificial intelligence, which analyzes and learns from massive sets of digital information culled from public and private sources, insurers are embracing the technology's many facets -- from machine learning and natural language processing to robotic process automation and audio/video analysis -- to provide better products. Customers, too, are benefitting from practices like comparative shopping, quick claims processing, around-the-clock service and improved decision management. To get a better sense of how AI impacts the insurance industry, check out these 25 AI insurance applications. Liberty Mutual explores AI through its initiative Solaria Labs, which experiments in areas like computer vision and natural language processing. Auto Damage Estimator is one result of these efforts.


5 ways to avoid artificial intelligence bias with 'responsible AI'

#artificialintelligence

Over the last few years, responsible AI has gone from a niche concept to a constant headline. Responsible, trustworthy AI is the subject of several documentaries, books, and conferences. The more we make responsible AI an expectation and a known commodity, the more likely we are to make it our reality, allowing us to flourish with more accessible AI. This is our shared goal through the Responsible AI Badge certification programme for senior executives.


ELEXIS from Α to Ω: Outcomes, Sustainability & Afterlife of a new European Lexicographic Infrastructure, Firenze 2022

VideoLectures.NET

The ELEXIS showcase event invites representatives of institutions that have become observers, as well as people from industry working in fields such as Language Technology, Machine Translation, language learning, and Dictionary Publishing.



3 Different Approaches for Train/Test Splitting of a Pandas Dataframe

#artificialintelligence

Usually, the Train/Test Splitting process is one of the Machine Learning tasks taken for granted. In fact, data scientists tend to focus more on Data Preprocessing or Feature Engineering, delegating the split of the dataset to a single line of code. In this tutorial, I assume that the whole dataset is available as a CSV file, which is loaded as a Pandas DataFrame. Scikit-learn provides a function, named train_test_split(), which automatically splits a dataset into a training and a test set. The function accepts either lists or Pandas DataFrames as input.
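As a concrete illustration of that single line, here is a minimal sketch of the scikit-learn approach described above; the file name dataset.csv and the target column label are hypothetical placeholders rather than names from the article.

    # Minimal sketch: load a CSV into a Pandas DataFrame and split it with
    # scikit-learn's train_test_split(). File and column names are placeholders.
    import pandas as pd
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("dataset.csv")            # whole dataset as a DataFrame

    X = df.drop(columns=["label"])             # feature columns
    y = df["label"]                            # target column

    # 80/20 split; random_state makes the split reproducible
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    print(X_train.shape, X_test.shape)

Passing the DataFrame directly, rather than converting it to lists or arrays, keeps column names and row indexes intact in the resulting training and test sets.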


If You Want to Succeed With Artificial Intelligence in Marketing, Invest in People

#artificialintelligence

Marketing won’t deliver on AI’s promise unless the human side of the equation is given equal attention.


A neural network picks promising antibiotics from a library of chemicals

#artificialintelligence

Biochemists have had some success designing drugs to meet specific goals. But much of drug development remains a tedious grind, screening hundreds to thousands of chemicals for a "hit" that has the effect you're looking for. There have been several attempts to perform this grind in silico, using computers to analyze chemicals, but these have had mixed results. Now, a US-Canadian team reports that it has modified a neural network to deal with chemistry and used it to identify a potential new antibiotic. Two factors greatly influence the success of a neural network: the structure of the network itself and the training it undergoes.
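To make those two factors concrete, the toy sketch below shows where architecture and training choices appear in code. It is not the model from the study: it uses a generic multilayer perceptron from scikit-learn on randomly generated stand-in "fingerprint" vectors, purely to illustrate the two knobs.

    # Toy illustration of the two factors named above: network structure and
    # training. This is NOT the study's model; the data are random stand-ins
    # for molecular fingerprints and activity labels.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(200, 128))    # 200 fake molecules, 128-bit fingerprints
    y = rng.integers(0, 2, size=200)           # fake "inhibits growth" labels

    # Structure: two hidden layers of 64 and 32 units.
    # Training: iteration budget and random seed (plus solver defaults).
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    model.fit(X, y)

    # In a real screen, ranked probabilities over a chemical library would
    # flag candidates for laboratory testing.
    print(model.predict_proba(X[:5]))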