Collaborating Authors


Enterprise architects take charge of the digital revolution


Joe McKendrick is an author and independent analyst who tracks the impact of information technology on management and markets. As an independent analyst, he has authored numerous research reports in partnership with Forbes Insights, IDC, and Unisphere Research, a division of Information Today, Inc.

Enterprise architects have been adding a new designation to their titles: digital enterprise architect. That's because their roles have been expanding over the past few years, particularly with data analytics being added to their repertoires. That's the word from Thomas Erl, CEO of Arcitura Education, which provides technology skills training to thousands of professionals across the globe, and co-author of A Field Guide to Digital Transformation. "It's a new era for enterprise architects," he says.

Responsible adoption of AI in a cloud environment

MIT Technology Review

Thank you for joining us on "The cloud hub: From cloud chaos to clarity." The transformative potential of algorithmic systems and the reach of their effects, combined with scant supervision, can create reputational, financial, and ethical risks. Responsible AI is required to provide assurance to users and build continuous trust in AI-based systems.…

Healthcare Ethics in AI: Can Software Make Ethical Decisions?


As data analytics and other digital innovations become more broadly adopted in healthcare, artificial intelligence is moving into a supporting role in clinical decision-making. Hospitals are already using AI tools to develop customized care plans, check patients in for appointments, and answer basic questions like "How can I pay my bill?" Healthcare ethics in AI is gaining traction as the technology becomes an "intelligent associate" for physicians and practitioners. AI helps radiologists examine images more quickly and organize them more effectively.

It's never too early to get your AI ethics right


We all know when AI crosses an ethical line. What's less easy is understanding what each of these examples has in common, and drawing lessons that apply to early-stage companies. There are plenty of broad statements of AI ethics principles, but few tools for putting them into practice, especially ones tuned for the harsh realities of startups tight on money and time. That challenge extends to VCs too, who must increasingly attempt to assess whether founders have thought through how customers, partners, and regulators might react to the ways they're using artificial intelligence. Even when founders have the best intentions, it's easy to cut corners.

La veille de la cybersécurité


There is no denying that artificial intelligence (AI) plays a significant role in how we go about our daily lives. From predictive searches and automated translations to futuristic use cases like self-driving cars, AI has captured the imagination of managers, CXOs, tech workers, and end users alike. That said, ask any two individuals what AI is, and one is very likely to get two conflicting answers. This is true not just among ordinary people but also among top-level decision-makers. Many business leaders want to understand how AI will impact their business.

The autonomous enterprise is near, but there are still some missing pieces


Building and supporting the artificial intelligence infrastructure that is guiding our businesses is not an easy job. The applications, data, and networks behind the scenes have to perform as close to flawlessly as possible, in real time. The good news is that AI itself can be employed to provide relief to stressed IT teams. AIOps, artificial intelligence for IT operations, is paving the way to autonomous operations of critical enterprise systems.

Artificial intelligence: MEPs want the EU to be a global standard-setter


On Tuesday, the European Parliament adopted the final recommendations of its Special Committee on Artificial Intelligence in a Digital Age (AIDA). The text, adopted with 495 votes in favour, 34 against, and 102 abstentions, says that the public debate on the use of artificial intelligence (AI) should focus on the technology's enormous potential to complement human labour. It notes that the EU has fallen behind in the global race for tech leadership, and that there is a risk of standards being developed elsewhere, often by non-democratic actors, while MEPs believe the EU needs to act as a global standard-setter in AI. MEPs say the EU should not always regulate AI as a technology; instead, the level of regulatory intervention should be proportionate to the type of risk associated with the particular use of an AI system. The report will feed into upcoming parliamentary work on AI, in particular the AI Act, which is currently being discussed in the Internal Market and Consumer Protection (IMCO) and the Civil Liberties, Justice and Home Affairs (LIBE) committees.

The quest for explainable AI


Artificial intelligence (AI) is highly effective at parsing extreme volumes of data and making decisions based on information that is beyond the limits of human comprehension. But it suffers from one serious flaw: it cannot explain how it arrives at the conclusions it presents, at least not in a way that most people can understand. This "black box" characteristic is starting to throw some serious kinks into the applications that AI is empowering, particularly in medical, financial, and other critical fields, where the "why" of any particular action is often more important than the "what." This is leading to a new field of study called explainable AI (XAI), which seeks to infuse AI algorithms with enough transparency that users outside the realm of data scientists and programmers can double-check an AI's logic and confirm it is operating within the bounds of acceptable reasoning, bias, and other factors.