The Future Of AI: Careers In Machine Learning - AI Summary

#artificialintelligence

Machine learning is a branch of data science that involves using "data science programs that can adapt based on experience," said Ben Tasker, technical program facilitator of data science and data analytics at Southern New Hampshire University. As the fields of science and engineering continue to advance, artificial intelligence is becoming "a lot less artificial and a lot more intelligent," Tasker said. Because so much about the field of data science in general, and AI in particular, is new, there are many opportunities to "make your own niche, especially now that many companies have started to invest in the idea of artificial intelligence," Tasker said. AI Engineer: In this role, one may be involved in the different facets of designing, developing, and building artificial intelligence models using machine learning algorithms. Big Data Engineer: Overlapping with the role of a data scientist, the person in this role analyzes a company's large volumes of data, known as "big data," and then uses those analyses to mine useful information in support of the company and its business model.


50 Examples of Machine Learning & AI in Data Analysis

#artificialintelligence

Analytics has been changing the bottom line for businesses for quite some time. Now that more companies are mastering their use of analytics, they are delving deeper into their data to increase efficiency, gain a greater competitive advantage, and boost their bottom lines even more. That's why companies are looking to implement machine learning (ML) and artificial intelligence (AI): they want a more comprehensive analytics strategy to achieve these business goals. Learning how to incorporate modern machine learning techniques into their data infrastructure is the first step. For this, many are looking to companies that have already begun the implementation process successfully. For call centers, using ML and AI means having conversation analytics software in place; in fact, call centers began using primitive forms of artificial intelligence decades ago.
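The article stays at the strategy level, but a minimal sketch can make "conversation analytics" concrete: a tiny text classifier that scores call-transcript snippets as positive or negative. The transcripts, labels, and the TF-IDF plus logistic-regression model below are illustrative assumptions, not the software used by any of the companies the article surveys.

```python
# Minimal conversation-analytics sketch for call-center transcripts.
# Assumes scikit-learn is installed; the snippets and labels are invented
# for illustration and are far too small for a real model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "thank you so much, that solved my problem",
    "I have been on hold for an hour and nobody helps",
    "the agent was friendly and the refund arrived quickly",
    "this is the third time I am calling about the same bug",
]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features plus logistic regression: a simple, common baseline
# for scoring customer sentiment in call-center text.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, labels)

new_call = ["still waiting, nobody has called me back"]
print(model.predict(new_call)[0])        # predicted sentiment label
print(model.predict_proba(new_call)[0])  # class probabilities
```

In a real deployment such a model would be trained on transcribed call audio at much larger scale, and the sentiment scores would typically be combined with operational metrics such as handle time and escalation rate.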


Government Deep Tech 2022 Top Funding Focus Explainable AI, Photonics, Quantum

#artificialintelligence

DARPA, In-Q-Tel, and the US National Laboratories (for example, Argonne and Oak Ridge) are well-known government funding agencies for deep tech on the forward boundaries, the near impossible, with globally transformative solutions. The Internet is a prime example: more than 70% of the world's 7.8 billion people are online in 2022, daily mobile usage is closing in on 7 hours, and $500 trillion in global wealth is powered by the Internet. There is convergence between the early bets led by government funding agencies and the investments of the largest corporations. In 2015, for example, I was invited to help the top 100 CEOs, representing nearly $100 trillion in assets under management, look ten years into the future for their investments. The working groups and private summits that followed led the member companies to invest in all the areas identified: quantum computing, blockchain, cybersecurity, big data, privacy and data, AI/ML, the future of fintech, financial inclusion, ...


US Companies Must Deal with EU AI law, Like It or Not

#artificialintelligence

Don't look now, but using Google Analytics to track your website's audience might be illegal. That's the view of a court in Austria, which in January found that Google's data product was in breach of the European Union's General Data Protection Regulation (GDPR) because it was not doing enough to make sure data transferred from the EU to the company's servers in the US was protected (from, say, US intelligence agencies). For those working in AI and biotech, the ruling matters, especially to those working outside of Europe with a view to expanding there. For a start, it is a major precedent that threatens to upend the way many tech companies work, since the tech sector relies heavily on the safe use and transfer of large quantities of data. Whether you use Google Analytics is neither here nor there; the case has shown that Privacy Shield -- the EU-US framework that governs the transfer of personal information in compliance with GDPR -- may not be compliant with European law after all.


Big Data Industry Predictions for 2022 - insideBIGDATA

#artificialintelligence

As a result, all major cloud providers are either offering or promising to offer Kubernetes options that run on-premises and in multiple clouds. While Kubernetes is making the cloud more open, cloud providers are trying to become "stickier" with more vertical integration. From database-as-a-service (DBaaS) to AI/ML services, the cloud providers are offering options that make it easier and faster to code. Organizations should not take a "one size fits all" approach to the cloud. For applications and environments that can scale quickly, Kubernetes may be the right option. For stable applications, leveraging DBaaS and built-in AI/ML could be the perfect solution. For infrastructure services, SaaS offerings may be the optimal approach. The number of options will increase, so create basic business guidelines for your teams.
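As a rough illustration of what "basic business guidelines" might look like in practice, the sketch below encodes the rule of thumb from the paragraph above (Kubernetes for fast-scaling applications, DBaaS and built-in AI/ML for stable applications, SaaS for infrastructure services) as a small decision helper. The workload categories and the mapping are assumptions drawn only from this excerpt, not an official framework from any provider.

```python
# Toy decision helper encoding the guideline sketched in the paragraph above.
# The workload categories and recommendations are illustrative assumptions.
RECOMMENDATIONS = {
    "fast_scaling_app": "Kubernetes (on-premises or multi-cloud)",
    "stable_app": "DBaaS plus the cloud provider's built-in AI/ML services",
    "infrastructure_service": "SaaS offering",
}

def recommend_platform(workload_type: str) -> str:
    """Return the suggested platform for a given workload category."""
    try:
        return RECOMMENDATIONS[workload_type]
    except KeyError:
        raise ValueError(
            f"Unknown workload type {workload_type!r}; "
            f"expected one of {sorted(RECOMMENDATIONS)}"
        )

if __name__ == "__main__":
    print(recommend_platform("stable_app"))
```

The point is not the code itself but that the guideline becomes explicit, versioned, and easy for teams to query and amend as the number of cloud options grows.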


A reading list for uncertain times

Science

From an incisive ethnography of predictive policing to a compelling indictment of technology-enabled learning tools, the books on this year's fall reading list offer valuable context to the myriad challenges currently facing humanity. Dive deep into a public health disaster shrouded in secrecy, sit with the uncomfortable questions raised by a fictional foray into the future of intimacy, confront the challenges to sustainable development posed by environmental racism, and learn what a QR-coded chicken in rural China portends about the future of agriculture. When you are through, sit back and marvel at the odds stacked against humanity from the start with an entertaining romp through evolution and then leave your earthly worries behind with an ambitious tour of the Solar System. —Valerie Thompson Reviewed by Ivor Knight 1 Through a series of chance events, the pathogen we now know as severe acute respiratory syndrome coronavirus 2 emerged in 2019 and infected millions of humans within a span of 6 months. But chance has driven more than just the planet's latest pandemic. In his new book, A Series of Fortunate Events: Chance and the Making of the Planet, Life, and You , Sean B. Carroll takes readers on an entertaining tour of biological discovery that emphasizes the dominant role played by chance in shaping the conditions for life on Earth. Along the way, he provides insights and humor that make the book a quick, lively read that both educates and entertains. Carroll begins with one of the most consequential chance events to have occurred in the history of our planet: the Cretaceous-Paleogene asteroid impact on the Yucatán Peninsula that resulted in the extinction of the dinosaurs and expansion of mammals. Given Earth's rotational speed, if the asteroid had hit 30 minutes earlier or later, scientists believe it would have made a much less consequential impact, landing in either the Atlantic or Pacific Ocean. If that had happened, there might still be dinosaurs today, but no humans. As he does throughout the book, Carroll compares the example from science with an example from popular culture, describing the comedian Seth MacFarlane's good fortune to have narrowly missed (by 30 minutes) one of the flights that was hijacked on 11 September 2001. Fundamental topics such as the roles that mutation and natural selection play in the evolution of diverse life-forms, the genetics of human reproduction, cellular mechanisms of acquired immunity, and the development of cancer are all treated within a framework where chance dominates. Carroll explains in detail how chance creates the genetic diversity upon which natural selection acts and results in the richness of species on Earth, as well as how random combinations among just 163 gene segments make possible a human immune system that can produce up to 10 billion different antibodies. Readers will likely be particularly interested to learn that their genome is only one of the 70 trillion possibilities that could have been produced by their parents. Written in a conversational style, the book reads like an updated version of Jacques Monod's 1970 Chance and Necessity that speaks directly to the reader, making complex subject matter more accessible. There is also a suggested reading list and an extensive bibliography included for further exploration. Carroll's central argument, that we are all here by luck, is certainly clear and compelling. What we choose to do with that luck, however, is where things really get interesting. 
Books such as this remind us to make our unlikely time here count. Reviewed by Gillian Bowser 2 Does a hurricane discriminate between the wealthy and the poor? Do earthquakes target specific victims? How does systemic racism influence development goals? In academic explorations of sustainable development and environmental responsibilities, our assumptions about the relationship between income and energy consumption remain largely rooted in the idea that social inequalities decrease as countries develop, thus reducing environmental inequality. No such relationship appears to actually exist. In his sobering but essential new book, Unsustainable Inequalities , economist Lucas Chancel explores the intersections of social justice and environmental sustainability with a focus on global goals established at the 2012 United Nations Conference on Sustainable Development, which informed the underlying philosophy of the 2015 Paris Agreement of the United Nations Framework Convention on Climate Change (UNFCCC) ([ 1 ][1]). Framing his narrative through the lens of intragenerational economic inequalities, he identifies social inequality as a core driver of environmental unsustainability that leads to a vicious circle wherein the rich consume more and the poor lose access to environmental resources and become increasingly vulnerable to environmental shocks. In 1987, the World Commission on Environment and Development issued a report called “Our Common Future” that defined sustainable development as “development that meets the need of the present without compromising the ability of future generations to meet their own needs” ([ 2 ][2]). The idea of intergenerational environmental equity became a cornerstone concept, shifting climate policy toward the common but differentiated responsibilities enshrined in the UNFCCC. Yet questions about intergenerational responsibility and the equitable impacts of climate change and environmental degradation remain. Environmental racism, wherein communities of color are disproportionately exposed to environmental risks, is inseparable from social justice, Chancel argues, and the attainment of sustainable development that also protects the environment across generations is “extremely difficult” without first addressing economic inequality within a single generation. The notion that we may be able to attain sustainable development and achieve equal responsibility for environmental degradation feels more unreachable than ever in a world upended by a global pandemic. In prepandemic times, many nations had already failed to implement or participate in local and global environmental justice efforts, and taxation schemes to level responsibilities for environmental pollution have proven wildly unpopular. And while Chancel argues that common indicator frameworks such as the United Nations' Sustainable Development Goals encourage nations to learn from one another, the continued rise of social inequality is a stark reminder of the difficult road ahead. Reviewed by Kanwal Singh 3 As the pandemic forces so many school systems and learning institutions to move online, the desire to educate students well using online tools and platforms is more pressing than ever. 
But as Justin Reich illustrates in his new book, Failure to Disrupt , there are no easy solutions or one-size-fits-all tools that can aid in this transition, and many recent technologies that were expected to radically change schooling have instead been used in ways that perpetuate existing systems and their attendant inequalities. The first half of the book discusses the brief histories, limited successes, and challenges of three types of large-scale technology-driven learning environments: instructor-guided, such as lectures taught through massive open online courses (MOOCs); algorithm-guided (e.g., Khan Academy); and peer-guided (e.g., the online coding community known as Scratch). Reich gives a solid accounting of the conditions needed for success with these models, the difficulties and limitations involved in adopting them in K–12 schooling, and the challenges that arise when we attempt to compare different approaches to one another. He argues that although we might think that the availability of a technology is its biggest limiter, the truth is that educational systems are simply not constructed to allow for experimentation and new ways of learning. Reich describes himself as committed to “methodological pluralism.” He supports the use of an array of learning tools and mechanisms, although he confesses to a particular admiration for peer-guided environments. He argues, however, that the incentive structures in formal education do not encourage the more innovative and deeper learning that can blossom in these environments. If we insist on maintaining current methods of assessment and ranking, which center on individual achievement, then peer-guided instruction will remain relegated to the sidelines. The second part of the book expands on the challenges of implementing educational technologies. Reich's main argument here is that educational systems are inherently conservative and that change will happen, albeit slowly and incrementally, only if technology designers, teachers, and administrators work in partnership to understand the desired learning goals and the parameters that define and constrain the learning environments. One of the most intractable pieces of the educational technology puzzle is the need to effectively conduct large-scale assessment, especially when the skills being assessed are not things that computers can do. Here, Reich cites a humorous example of an automated grading system giving high marks to an essay that begins with the technically grammatically correct sentence: “Educatee on an assassination will always be a part of mankind.” At the end of the book, Reich offers four questions that he finds especially useful to consider when examining a new large-scale educational technology. Perhaps the most useful question is the first: “What's new?” Despite what “edtech evangelists” might claim, new technologies often have closely related ancestors that can help predict their success, he argues. In the end, however, new technologies alone are unlikely to have a substantial impact on schooling. We must also be open to changing educational goals and expectations according to the possibilities offered by emergent technologies. Reviewed by Arti Garg 4 In Blockchain Chicken Farm , Xiaowei Wang reveals the myriad ways that technology is transforming our lives. 
They unveil, for example, the unexpected connections that exist between industrial oyster farming in rural China, livestream-fueled multilevel marketing schemes in the United States, and the app-enabled gig economy in which Chinese influencers participate. Following the threads of places and people woven together by new technologies, Wang helps readers trace the patterns emerging in the tapestry of our tech-infused world. Each chapter provides a view into not just how we use technology but why and to what end. Emphasizing the often-hidden human engine that powers our app-driven economy, Wang exposes the flaw in our tendency to conflate societal and cultural aspirations with the promises of technology and challenges us to honestly measure what value technology delivers. In the 21st century, they argue, we demand that technologists solve the problems that our governments and communities have not. In doing so, we inadvertently empower companies to exploit and amplify those same problems. Most of Wang's vignettes relate to Chinese agriculture. This decision, which roots the narrative in the visceral language of human sustenance, grounds the heady subject matter. The titular example takes readers to the GoGoChicken farm in Sanqiao, a “dreamlike” village that sits in one of the poorest regions in China. Here, Wang introduces the straw-hatted “Farmer Jiang,” who has partnered with his village government and a blockchain company to sell free-range chickens via an e-commerce site. Jiang's chickens sell for RMB 300 (∼$35) each, an amount equal to 6% of the average annual household income in that part of China. Wang explains that high-profile failures of regulatory oversight have left many Chinese with a deep distrust of the food supply chain and that upper-class Chinese urbanites will pay a premium for reassurance about food safety, which, in this case, takes the form of a vacuum-sealed chicken that comes with a QR code revealing blockchain-logged details of its life on the farm. Wang suggests that Americans, driven by concerns over animal welfare, may desire similar reassurance about their food's provenance. In both China and America, they observe, technology allows the upper class to buy its way around governmental and societal shortcomings at prices that are out of reach for most people. Technology does not correct the intrinsic problems, and most cannot reap the benefits of the technological “solutions.” Without resorting to an overly romanticized notion of rural wisdom, Wang treats individuals like Jiang, whose future remains uncertain owing to the vagaries of e-commerce supply chains, with respect and empathy. Because of this, they largely succeed in their goal of reframing our understanding of technology as neither the cause of nor the solution to our problems but rather as a force reshaping the human experience in fundamental ways. Reviewed by Heather Bloemhard 5 The Secret Lives of Planets by Paul Murdin includes a plethora of information about our Solar System. Murdin covers planets, asteroids, moons, dwarf planets, and more, approximately one per chapter. Even exoplanets—the planets that orbit a star other than our Sun—are referenced frequently, although not in their own chapter. Using only a few images, Murdin illustrates the historical and physical concepts that surround each of these elements in prose peppered with anecdotes from his own career as an astronomer. 
While the book's tone is pleasant and conversational, the discussions are often technical in nature, and I worry that some readers may be frustrated by its many tangents and loose organizational structure. For example, in his discussion of the formation of Mercury, Murdin references the formation of exoplanets, the discovery of 'Oumuamua, and Earth's fossil record. The same chapter also refers to Earth and Venus to help explain orbital eccentricity and precession, but this analogy may fall short for lay readers. I was also disappointed that Murdin relied almost exclusively on the accomplishments of European men to tell the story of how our understanding of the Solar System emerged over time. He writes, for example, of Nicolaus Copernicus's revelations about the geometry of our solar system but neglects the work of Muslim astronomers who developed models of heliocentric orbits hundreds of years earlier. Murdin is far from alone in this misstep, but it is well worth striving to do better. Despite these criticisms, every reader will learn something from this ambitious book. Did you know, for example, that some scientists once believed there were oases of vegetation on Mars, or that others believed that martians might try to colonize Earth? From the exchange of planetary material by way of meteorites to the formation of asteroids, Murdin covers a wide range of astronomical topics, including the aurora of Jupiter, the mysteries of Uranus, and the potential of the moons of Jupiter and Saturn to support recognizable life. I found Murdin's personal recollections to be the most compelling feature of The Secret Lives of Planets . He tells the story of how, as a student, he observed the shadows cast by the tops of clouds of different heights on Venus using a telescope similar to the one used by Galileo and uses this anecdote as a starting point to explain what the Italian astronomer discovered about the planet. Recounting the time he observed the launch of Cassini-Huygens, a probe sent to Saturn's moon Titan, Murdin explains what scientists had hoped to learn from this mission and what they ended up discovering. He also discusses attending the 2006 International Astronomical Union conference, where a debate was held about the definition of a planet, and reveals what it was like to cast a vote on the final decision. In the end, there is much to recommend The Secret Lives of Planets as an introductory text on our solar system. Reviewed by Peter Reczek 6 Modern cancer therapies are often the result of years of targeted research and development, making it easy to forget that many of the field's early breakthroughs had as much to do with chance as they did with preparation. In The Great Secret , Jennet Conant recounts one such breakthrough, which was made in the wake of a deadly disaster. Conant's engrossing story is set in the Italian port town of Bari, which was used as an important staging area for the distribution of supplies supporting Allied troops as they pushed north through Italy during World War II. On 2 December 1943, a day that would later be referred to as “a little Pearl Harbor,” German military aircraft sank more than 20 Allied ships anchored in Bari, leading to the loss of more than 1000 Allied servicemen and Italian civilians. Lieutenant Colonel Stewart Alexander, a medical officer attached to General Eisenhower's headquarters in North Africa, was sent to coordinate medical relief efforts. 
In Bari, Alexander found “a nightmarish scene.” In the aftermath of the air raid, “The walking wounded staggered in [to the hospital] unaided, suffering from shock, burns, and exposure after having been in the cold water for hours before being rescued. Others had to be supported, as they cradled fractured arms in improvised slings or dragged mangled limbs…Almost all of them were covered in thick, black crude oil,” writes Conant. In addition to the acutely injured, Alexander discovered victims whose injuries had emerged days after the attack and could not be attributed to the percussive effects of the bombing. After analyzing the positions of the ailing seamen, Alexander reported that an American Liberty ship, the John Harvey , was the source of the problem, speculating that it likely contained a secret cache of nitrogen mustard (i.e., mustard gas). Both the American and British governments denied any such cache, but Conant reveals that Alexander persisted, and his controversial report—which, crucially, documented a decrease in white blood cell counts in the victims—was accepted by the Allied High Command with a classification of “Secret.” After the war, Colonel C. P. “Dusty” Rhoads, who had been Alexander's superior during the Bari investigation, reasoned that an agent that reduced white blood cells might be useful in treating some forms of leukemia. While serving as the first director of the Sloan Kettering Institute, Rhoads oversaw a clinical trial to test nitrogen mustards as potential therapeutic agents for the treatment of neoplastic disease. The results exceeded expectations. “In their first attempt to treat patients with inoperable lung cancer with nitrogen mustard, the Memorial team reported that of the thirty-five patients, 74 percent showed some clinical improvement” writes Conant. Many similar compounds, collectively known as alkylating agents, are still the foundation of the combination chemotherapy used to treat some forms of leukemia. Drawing largely from archival research, Conant relies on a loose conversational style to convey a fast-paced medical detective story that demonstrates how careful scientific observation can yield unexpected benefits and serves as a reminder of the difficult choices made by governments to balance public health and secrecy in matters of security. Reviewed by Esha Mathew 7 In quantum physics, entanglement is a property wherein two particles are inextricably linked. Put another way, entangled particles are never truly independent of each other, no matter the distance between them. It is fitting then that Entanglements is an anthology of short stories about inextricably linked people and the impact of emerging technologies on their relationships. A talented set of authors, with deft editing by Sheila Williams, explore the full spectrum of intimacy and technology to great effect. As an added visual treat, illustrations by Tatiana Plakhova punctuate each story with a blend of science, mathematics, and art that complements the subject matter. Even with the length limitations of a short story, the world-building in this compilation is frequently full and often insidiously terrifying, particularly in those stories that use the familiar as breadcrumbs to lure the reader in. The very first tale, “Invisible People” by Nancy Kress, begins with a mundane morning routine and carefully layers in a story about two parents reeling from an unsanctioned genetic experiment on their child. 
In “Don't Mind Me,” Suzanne Palmer uses the shuffle between high school classes as a foundation on which to build a story about how one generation uses technology to enshrine its biases and inflict them on the next. The ethical implications in these stories offer fodder enough for plenty of late-night discussions. It is also chilling how entirely possible many of the fictional futures seem. But looking forward need not always be bleak. This volume balances darker-themed stories with those in which technology and people collide in uplifting and charming ways. In Mary Robinette Kowal's “A Little Wisdom,” for example, a museum curator, aided by her robotic therapy dog–cum–medical provider, finds the courage within herself to inspire courage in others and save the day. Meanwhile, in Cadwell Turnbull's “Mediation,” a scientist reeling from a terrible loss finally accepts her personal AI's assistance to start the healing process. And in arguably the cheekiest tale in this compilation, “The Monogamy Hormone,” Annalee Newitz tells of a woman who ingests synthetic vole hormones to choose between two lovers, delivering a classic tale of relationship woes with a bioengineered twist. With such a dizzying array of technologies discussed in relation to a range of human emotion and behavior, readers may experience cognitive whiplash as they move from one story to the next. But it is definitely worth the risk. The 10 very different thought experiments presented in this volume make for a fun ride, revealing that human relationships will continue to be as complicated and affirming in the future as they are today. I would recommend the Netflix approach to this highly readable collection: Binge it in one go, preferably with a friend. Reviewed by Joseph B. Keller 8 The U.S. police system is experiencing a reckoning. Protesters across the country (and around the world) have taken to the streets, arguing that police brutality disproportionately harms minority communities, and the current value of policing is being debated by city councils, lawmakers, and members of the news media. Into this tumultuous context enters Sarah Brayne's book, Predict & Surveil: Data, Discretion, and the Future of Policing . A sociologist by training, Brayne synthesizes interview data and field notes from 5 years of observation within the Los Angeles Police Department, employing a firsthand ethnographic approach to reveal how big data are currently used in tech-forward police departments in America. She chronicles both consequential and mundane interactions between officers, civilians, and data. For example, she documents officers uploading license plate numbers, field interview notes, traffic citations, and potential gang affiliations onto a private industry data platform, as well as their active surveillance of hotspots in Los Angeles predicted to be criminogenic. This fly-on-the-wall perspective captures the human aspect of a police force grappling with automated systems and machine-learning decisions in real time, juxtaposing the experiences of individual officers with institutional directives being handed down from administrators and lawmakers. Many police departments contend that the adoption of predictive analytics can improve objectivity and transparency, reduce bias, and increase accountability. Yet Brayne's book reveals how few of these metrics actually improve with predictive policing and exposes the scant evidence that supports the idea that it reduces crime rates. 
On the contrary, she insists, predictive policing raises glaring civil rights concerns and reinforces harmful racial biases. We all leave digital traces throughout our daily lives, and innocent people can be caught in the dragnet and cataloged in a digital criminal justice system, where a case can be built from benign data. Police unions, Brayne notes, often vehemently oppose the tracking of their own officers. She records incidents of officers turning off their car locator signals, for example, as well as other tactics used to thwart tech-infused managerial oversight. Many officers view policing as an art form rather than a scientific system that can be optimized. To some, big data policing threatens their sense of police instincts and identity. "They worry that they will become nothing more than line workers and insist that their years of accumulated experiential knowledge is irreplaceable," observes Brayne. Brayne's book raises timely issues relevant to mass surveillance and policing amid a growing debate about facial recognition systems, which makes their omission from this work notable. Although banned in several major American cities, these systems remain a common tool for identifying potential offenders, despite abundant evidence of dangerous inconsistencies. Predictive policing can drive societal inequalities, but Brayne suggests that reducing instances of general police contact may mitigate disparities. In addition to offering immediate recommendations for changing law enforcement in the digital age, she asserts that effective programmatic reforms are typically influenced by external social organizing and guided by communities. (The likelihood of real transformation from within the police system is small, she believes.) For judicial and policing institutions genuinely seeking reform, this book provides powerful observations and analysis that suggest how we can begin.

1. Paris Agreement to the United Nations Framework Convention on Climate Change, 12 December 2015, TIAS No. 16-1104.
2. World Commission on Environment and Development, Our Common Future (Oxford Univ. Press, 1987).


How to Define and Execute Your Data and AI Strategy · Harvard Data Science Review

#artificialintelligence

Over the past decade, many organizations have come to recognize that their future success will depend on data and artificial intelligence (AI) capabilities. Expectations are high, and companies are investing heavily in the area. However, our experience advising organizations in diverse industries suggests that many have also become disillusioned in their journey to create companywide, data-driven business transformation. This article discusses some of the common pitfalls in the implementation of data and AI strategies and gives recommendations for business leaders on how to successfully include data and AI in their business processes. These recommendations address the core enablers for data and AI capabilities, from setting the ambition level to hiring the right talent and defining the AI organization and operating model. Many companies are currently investing in data and AI. Since the terminology varies, these activities may be called AI, advanced analytics, data science, or machine learning, but the goals are the same: to increase revenues and efficiency in current business and to develop new data-enabled offerings. In addition, many companies see an increasing responsibility to contribute their AI expertise toward humanitarian and social matters. It is well understood that to stay competitive in the digital economy, a company's internal processes and products need to be smart--and smartness comes from data and AI. Over the past 4 years, our company DAIN Studios has been involved in more than 40 data and AI initiatives in different companies and industries in Finland, Germany, Austria, Switzerland, and the Netherlands. Our clients are typically large, publicly listed companies.


When Humans and Machines Make Joint Decisions: A Non-Symmetric Bandit Model

arXiv.org Machine Learning

As machine learning algorithms have become on par with, or even superior to, humans in a number of decision-making problems (Rajpurkar et al., 2017; Silver et al., 2018), the idea that humans might be assisted by computer programs across a large variety of tasks has gained momentum. Already today, automated decision-making systems are being used to predict cardiac arrests, credit scores, and recidivism (Tonekaboni et al., 2018; Board of Governors, 2007; Angwin et al., 2016). An emerging literature asks how humans, who often remain the final decision makers, can and should interact with such systems (Tonekaboni et al., 2019; Lucic et al., 2020). Machine learners, in turn, want to understand how algorithms should be designed such that interaction with humans is as fruitful as possible (Carroll et al., 2019). In most real-world decision-making problems, humans have access to information that is unobservable to any algorithm. In the medical domain, doctors can obtain important information from personal interaction with patients (Goldenberg and Engelhardt, 2019). In judicial bail, judges may base their decision on the behavior of the defendant in the courtroom (Lakkaraju et al., 2017). From a machine learning point of view, such unobserved variables arise not because the algorithm's designer failed to collect them. Rather, it is a property of many real-world decision problems that formulating all relevant aspects as inputs to an algorithm is impossible.
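The abstract does not spell out the model here, but a toy simulation can illustrate the setting it describes: a bandit in which the human sees a context variable that is hidden from the algorithm and can override the algorithm's suggestion. Everything below (the reward structure, the override rule, and all parameters) is an invented sketch, not the paper's actual non-symmetric bandit model.

```python
# Toy illustration of joint human-machine decisions in a two-armed bandit.
# The machine runs epsilon-greedy on observed rewards only; the human also
# sees a binary context that flips which arm is better and may override the
# machine's suggestion. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
T, eps = 5000, 0.1
counts = np.zeros(2)   # machine's pull counts per arm
values = np.zeros(2)   # machine's running mean reward per arm

def reward(arm: int, context: int) -> float:
    # Arm 0 is better when context == 0, arm 1 is better when context == 1.
    p = 0.7 if arm == context else 0.3
    return float(rng.random() < p)

total = 0.0
for t in range(T):
    context = int(rng.random() < 0.5)          # seen by the human, not the machine
    # Machine suggestion: epsilon-greedy on context-free value estimates.
    if rng.random() < eps or counts.min() == 0:
        suggestion = int(rng.integers(2))
    else:
        suggestion = int(np.argmax(values))
    # Human decision: usually trusts private information, sometimes defers.
    action = context if rng.random() < 0.8 else suggestion
    r = reward(action, context)
    total += r
    # Machine updates its estimate for the arm that was actually played.
    counts[action] += 1
    values[action] += (r - values[action]) / counts[action]

print(f"average reward with human overrides: {total / T:.3f}")
```

Even in this crude setup the machine's value estimates are confounded by the context it cannot observe, which is the kind of information asymmetry between human and algorithm that the abstract describes.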


AI Regulation: Has the Time Arrived? - InformationWeek

#artificialintelligence

Is artificial intelligence getting too smart (and intrusive) for its own good? A growing number of nations have concluded that it's time to take a close look at AI's impact on an array of critical issues, including privacy, security, human rights, crime, and finance. A proposal for an international oversight panel, the Global Partnership on AI, already has the support of six members of the Group of Seven (G7), an international organization composed of the nations with the largest and most advanced economies. The G7's dominant member, the United States, remains the only holdout, claiming that regulation could hamper the development of AI technologies and hurt US businesses. The Global Partnership on AI and the OECD's G20 AI principles represent a good first step toward building a worldwide AI regulatory structure, noted Robert L. Foehl, an executive-in-residence for business law and ethics at Ohio University.