If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Srini Penchikala: Welcome to the InfoQ podcast Annual Trends Report on AI, ML, and data engineering. I am joined today by the InfoQ editorial team and an external panelist. There have been a lot of innovations and developments happening in the AI and ML space. Before we jump into the main part of this podcast, let's start with introductions of our panelists. Rags, can you please introduce yourself?

Rags Srinivas: Glad to be here. I was here for the previous podcast last year as well. Things have changed quite a bit, but I focus mainly on big data infrastructure and the confluence of data and AI. There are quite a few developments happening there that I'd love to talk about when we get there. I work for DataStax as a developer advocate, and essentially it's all about data, AI, infrastructure, how to manage your costs, and how to do it efficiently. Hopefully, we'll cover all that.

Roland: I'm Roland, a machine learning engineer, and I hope to talk a lot about transformer models and large-scale foundational models. For InfoQ, I like to write about some of the latest innovations in deep learning, and I definitely want to talk about NLP and some of the multi-modal text and image models.

Srini Penchikala: Next is Daniel Dominguez.

Daniel Dominguez: Thank you for the invitation. I like to write about the metaverse, new technologies, and deep learning.
Research-led Cork start-up CergenX is putting a vast databank of baby brainwaves to work using the AI that underpins tech like Siri and Alexa. According to Jason Mowles, around five in every 1,000 newborn babies have some form of brain abnormality at birth, and many of these go undetected. "It is simply not possible to test all newborns," said Mowles. Research indicates that early detection of brain injury would improve long-term outcomes, as the sooner treatments or interventions are introduced, the better. And research is at the core of Mowles' start-up, CergenX, which sets out to make testing of all newborns not only possible, but effective at evaluating brain health at this early stage of life. Driving the research behind the start-up is co-founder Geraldine Boylan, professor of neonatal physiology at University College Cork (UCC) and co-founder and director of the Infant research centre.
Science and technology are developing at a very fast pace, and many new techniques are being introduced to make everyday tasks easier. One of the newer techniques used for educating the masses is Artificial Intelligence (AI). Artificial Intelligence enables machines and computers to mimic the capabilities of the human brain in decision-making and problem-solving. Related terms include machine learning and deep learning; these terms are frequently used interchangeably, but they are not the same.
Artificial Intelligence (AI) is one of the most transformative technologies of our lifetime. Movies often portray AI as something hostile or insidious, a robot villain or sentient general intelligence that turns against its human creators. But the reality is very different. Today's AI is used to power much more focused solutions that enrich and improve our lives. While every new technology introduces risks and the potential for misuse, the positive effects AI will have on our lives will far outweigh any harmful effects.
The Statsbot team has invited Peter Mills to tell you about data structures for machine learning approaches. So you've decided to move beyond canned algorithms and start to code your own machine learning methods. Maybe you've got an idea for a cool new way of clustering data, or maybe you are frustrated by the limitations in your favorite statistical classification package. In either case, the better your knowledge of data structures and algorithms, the easier time you'll have when it comes to coding them up. I don't think the data structures used in machine learning are significantly different from those used in other areas of software development.
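As a concrete illustration of that last point, a binary heap (priority queue) — a workhorse structure everywhere in software — is exactly what you'd reach for when a clustering method needs to repeatedly find the closest pair of points. The sketch below is illustrative, not from the article: the function name and data are made up, and it uses only Python's standard `heapq` module.

```python
import heapq

def closest_pairs(points, k):
    """Return the k closest point pairs as (squared_distance, i, j) tuples.

    A heap keeps the candidate pairs ordered by distance, so repeatedly
    popping the minimum is O(log n) per pop -- the same pattern an
    agglomerative clustering loop would use to pick which pair to merge next.
    """
    heap = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            heapq.heappush(heap, (d, i, j))
    return [heapq.heappop(heap) for _ in range(k)]

# Two tight clusters: points 0-1 near the origin, points 2-3 near (5, 5).
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.0, 5.2)]
pairs = closest_pairs(pts, 2)
print(pairs)  # the within-cluster pairs (0, 1) and (2, 3) come out first
```

A real implementation would also need to invalidate heap entries as clusters merge (often done lazily, by checking entries against the current cluster set when popped), but the priority-queue core is the same.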
You don't need to fire up DALL-E if you want AI to create images from text -- you just need a popular social media app. The Verge notes TikTok has introduced a rudimentary "AI greenscreen" effect in its Android and iOS apps that turns your text descriptions into artwork. It's much simpler than OpenAI's DALL-E 2, producing abstract blobs rather than photorealistic depictions, but it might do the trick if you want an original background for your latest video. As The Verge explains, though, there may be strong reasons to limit the AI generator's capabilities. Even if the required computational power isn't a problem, the potential output might be.
I am not a data scientist. And while I know my way around a Jupyter notebook and have written a good amount of Python code, I do not profess to be anything close to a machine learning expert. So when I performed the first part of our no-code/low-code machine learning experiment and got better than a 90 percent accuracy rate on a model, I suspected I had done something wrong. If you haven't been following along thus far, here's a quick review before I direct you back to the first two articles in this series. To see how much machine learning tools for the rest of us had advanced--and to redeem myself for the unwinnable machine learning task I had been assigned last year--I took a well-worn heart attack data set from an archive at the University of California-Irvine and tried to outperform data science students' results using the "easy button" of Amazon Web Services' low-code and no-code tools.
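The no-code tools hide the modeling step entirely, but for a rough sense of what a tool like that is doing under the hood on a binary-outcome data set, here is a from-scratch logistic regression trained on synthetic two-feature "risk factor" data. Everything here is an assumption for illustration: the data is randomly generated (not the UCI heart attack data set), and the function names are made up; only the Python standard library is used.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression weights with plain batch gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    m = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            # prediction error drives the gradient for each weight
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / m for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / m
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Synthetic data: label is 1 when the two "risk factors" sum above 1.0.
random.seed(0)
X = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(200)]
y = [1 if x1 + x2 > 1.0 else 0 for x1, x2 in X]

w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {accuracy:.2f}")
```

The "easy button" services automate far more than this — feature preprocessing, model selection, validation splits — which is exactly why a 90-percent-plus score out of the box is plausible rather than suspicious.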
We are a next-gen cybernetics start-up backed by a few top-tier investors (led by NEA). We aim to push the boundaries of what intelligent systems are capable of achieving, both autonomously and in collaboration with humans. Before starting Neo Cybernetica, our CEO founded the unicorn AI company DataRobot and led it for almost a decade while working directly with customers worldwide across many industries. You can expect to be part of something exciting at the frontier of human knowledge. We are looking for an Embedded Systems Engineer to join our fast-growing team of highly skilled professionals and work on breakthrough robotics technology.
Electronic brain implants could allow lawyers to quickly scan years of background material and cut costs in the future, a new report claims. The report from The Law Society sets out the way the profession could change for employees and clients as a result of advances in neurotechnology. It suggests that a lawyer with a chip implanted in his or her brain could potentially scan documentation in a fraction of the time, reducing the need for large teams of legal researchers. 'Some lawyers might try to gain an advantage over competitors and try to stay ahead of increasingly capable AI systems by using neurotechnology to improve their workplace performance,' wrote Dr Allan McCay, the author of the report. Neurotechnology could also allow firms to charge clients for legal services based on 'billable units of attention' rather than billable hours, as they would be able to monitor their employees' concentration.
Mobile devices use facial recognition technology to help users quickly and securely unlock their phones, make a financial transaction or access medical records. But facial recognition technologies that employ a specific user-detection method are highly vulnerable to deepfake-based attacks that could lead to significant security concerns for users and applications, according to new research involving the Penn State College of Information Sciences and Technology. The researchers found that most application programming interfaces that use facial liveness verification--a feature of facial recognition technology that uses computer vision to confirm the presence of a live user--don't always detect digitally altered photos or videos of individuals made to look like a live version of someone else, also known as deepfakes. Applications that do use these detection measures are also significantly less effective at identifying deepfakes than what the app providers have claimed. "In recent years we have observed significant development of facial authentication and verification technologies, which have been deployed in many security-critical applications," said Ting Wang, associate professor of information sciences and technology and a principal investigator on the project.