If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. Merve Hickok is the Senior Research Director of the Center for AI and Digital Policy, and the Founder of AIethicist.org. She is a social researcher, consultant and trainer on AI ethics & policy. Her work on AI is focused on bias, social justice, DE&I, public interest and participatory development and governance. She aims to create awareness, build capacity, and advocate for ethical and responsible development & use of AI.
We've all heard about the future of artificial intelligence and how powerful it can be. But, as designers, do we understand how powerful it is and how it relates to us? In short, it can reduce manual work, create multiple variations at once, and provide a personalized and humanized experience as well as data-driven design direction. AI design is the application of artificial intelligence to the creation of new designs for businesses or individuals. AI is rapidly evolving these days, and businesses have begun to employ it in a variety of fields.
In this first Deepdive episode of The Machine Ethics Podcast, we talk to Amandine Flachs, Tommy Thompson and Richard Bartle about AI in games: its history, its uses and where it's going. After supporting startup founders for more than 10 years, Amandine is now looking to help game developers create smarter and more human-like game AIs using machine learning. She is still involved in the startup ecosystem as a mentor, venture scout and through her series of live AMAs with early-stage entrepreneurs. She can be found on Twitter @AmandineFlachs.
When it comes to designing user experiences with our systems, the less, the better. We're overwhelmed, to put it mildly, with demands and stimuli. There are millions of apps and websites begging for our attention, and once we open a particular one, we are still bombarded by links and choices. Artificial intelligence is offering relief on this front. AI-driven user experience may help winnow a firehose of choices and information down to a gently flowing fountain of what is needed at the moment.
It's well known that smart audio speakers can recognize and distinguish among different voices, so it wasn't a big surprise when it became possible to control smart kitchen appliances by voice. What is a really big surprise is that computer vision – also known as image recognition – can now control appliances. I recently had a conversation with Shawn Stover, executive director of Smart Home Solutions at GE Appliances (GEA), about the company's partnership with Google Cloud and how that changes things for the kitchen industry. Stover described customer experiences that would be more like those we have with our smartphones: seamless and intuitive. For example, the process of roasting a chicken typically starts by looking up a recipe, setting the oven to bake at a certain temperature, putting the chicken in the oven and then taking it out when the timer goes off.
Nowadays, it's easier than ever to create a piece of graphic design, and this is especially true for logo design. These small graphic works tend to be simple, with a modest handful of elements and a limited color palette. They effectively identify a brand and have few technical requirements. Even so, a logo must be memorable, distinctive, and fit for the job it was designed to do. In theory, logo design is simple enough that anyone can attempt a DIY logo.
For the new sequel, "The Matrix Resurrections," filmmakers deployed much-higher-caliber technologies, including three-dimensional imagery made using artificial intelligence. But after 22 years of digital evolution, high-end movie effects are approaching a plateau near perfection. "We went from pulling off what seemed to be impossible, to a sort of inability to create surprise" in the movie industry, says John Gaeta, who helped craft the bullet-time effect. He was a visual-effects designer on the first three "Matrix" films; now he is making things for the metaverse. This year the movies presented us with a car slingshotting from cliff to cliff ("F9"); Ryan Reynolds running amok inside a videogame ("Free Guy"); and giant monsters crushing the Hong Kong skyline ("Godzilla vs. Kong").
Procedural stories in video games often induce a specific kind of delight. You'll know when it hits -- a realization that the code and algorithms of the game seem to be generating a coherent narrative from your own impulsive, seemingly chaotic actions. It's what 2020's viral sensation Blaseball and this year's breakout indie hit, Wildermyth, have in common -- two strikingly different games whose reactive stories are nevertheless cut from the same cloth. Players have grown accustomed to procedural generation in a spatial sense. Just look at the endless variations of levels that define games such as Hades in the ever-popular rogue-like genre, and the infinite planets that populate the virtual universe of 2016's No Man's Sky.
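To make the idea of spatial procedural generation concrete, here is a minimal, generic sketch (not code from any of the games mentioned): a seeded random generator carves a few rooms into a grid, so the same seed always yields the same level while different seeds yield endless variations.

```python
import random

def generate_level(seed, width=10, height=6, n_rooms=3):
    """Toy spatial procedural generation: carve random rooms into a wall grid."""
    rng = random.Random(seed)  # same seed -> same level; new seed -> new variation
    grid = [["#"] * width for _ in range(height)]
    for _ in range(n_rooms):
        room_w = rng.randint(2, 4)
        room_h = rng.randint(2, 3)
        x = rng.randint(0, width - room_w)
        y = rng.randint(0, height - room_h)
        for row in range(y, y + room_h):
            for col in range(x, x + room_w):
                grid[row][col] = "."  # open floor
    return ["".join(row) for row in grid]

for line in generate_level(seed=42):
    print(line)
```

The same deterministic-seed trick is what lets a game like No Man's Sky store only a seed per planet and regenerate the terrain on demand.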
The approach is detailed in a paper published at MICRO-54: the 54th Annual IEEE/ACM International Symposium on Microarchitecture, one of the top-tier conferences in computer architecture, where it was selected as the conference's best publication. "This is an intensively studied problem that has traditionally relied on extra circuitry to address," said Zhiyao Xie, first author of the paper and a PhD candidate in the laboratory of Yiran Chen, professor of electrical and computer engineering at Duke. "But our approach runs directly on the microprocessor in the background, which opens many new opportunities. I think that's why people are excited about it." In modern computer processors, cycles of computation occur on the order of 3 trillion times per second. Keeping track of the power consumed by such intensely fast transitions is important to maintaining the entire chip's performance and efficiency.