
Don't Let Tooling and Management Approaches Stifle Your AI Innovation


It is no coincidence that companies are investing in AI at unprecedented levels at a time when they are under tremendous pressure to innovate. The artificial intelligence models developed by data scientists give enterprises new insights, enable new and more efficient ways of working, and help identify opportunities to reduce costs and introduce profitable new products and services. The possibilities for AI use grow almost daily, so it's important not to limit innovation. Unfortunately, many organizations do just that by tethering themselves to proprietary tools and solutions. This can handcuff data scientists and IT teams as new innovations become available, and it results in higher costs than an open environment that supports best-of-breed AI model development and management.

The Case for Transparent AI


I can trace it back to when I watched a video of America's Got Talent. It started with singers, but soon it moved on to other categories, including illusionists. That was enough to tell Facebook's algorithms that I had to be interested in magic and that it should show me more of what it deduced I wanted to see. Now I have to be careful, because if I click on any of that content, it will reinforce the algorithm's notion that I must really be interested in card tricks, and pretty soon that's all Facebook will ever show me, even if it was all just a passing curiosity.

Why leaders should be using AI in their businesses right now


Artificial intelligence is no new concept. The phrase was first coined by John McCarthy in 1956[1], when he invited a group of researchers to discuss the notion of 'thinking machines' during a conference at Dartmouth College. Since then, it has been a point of fascination for scientists, academics, software developers, and moviemakers alike. Fast-forward to today, and you'll find plenty of examples hiding in plain sight: from digital assistants like Amazon's Alexa or Apple's Siri, which use AI to learn from user interactions, to automated email responses and search engines predicting what you're looking for.

3 Ways To Improve Cybersecurity In Your AI Infrastructure


AI models, applications, and systems are not impervious to cyber-attacks, so organizations must make an effort to protect their AI infrastructure from such threats. A secure AI infrastructure bodes well for the future of your organization's use of intelligent technology. Thanks to an obvious list of benefits, businesses of all types have come to depend heavily on AI over the last decade or so. Unfortunately, that heavy reliance on AI also becomes a weakness for businesses, especially when you consider the possibility of cyber-attacks affecting their AI infrastructure.



The graph represents a network of 1,251 Twitter users whose tweets in the requested range contained "#iiot", or who were replied to or mentioned in those tweets. The network was obtained from the NodeXL Graph Server on Tuesday, 14 September 2021 at 21:00 UTC. The requested start date was Tuesday, 14 September 2021 at 00:01 UTC and the maximum number of tweets (going backward in time) was 7,500. The tweets in the network were tweeted over the 1-day, 16-hour, 41-minute period from Sunday, 12 September 2021 at 07:20 UTC to Tuesday, 14 September 2021 at 00:01 UTC. Additional tweets that were mentioned in this data set were also collected from prior time periods.

Walmart to launch autonomous delivery service with Ford and Argo AI – TechCrunch


Walmart has tapped Argo AI and Ford to launch an autonomous vehicle delivery service in Austin, Miami and Washington, D.C., the companies said Wednesday. The service will allow customers to place online orders for groceries and other items using Walmart's ordering platform. Argo's cloud-based infrastructure will be integrated with Walmart's online platform, routing the orders and scheduling package deliveries to customers' homes. Initially, the commercial service will be limited to specific geographic areas in each city and will expand over time. The companies will begin testing later this year.

AI hardware pioneer Cerebras expands access in partnership with cloud vendor Cirrascale


The battle for artificial intelligence hardware keeps moving through phases. Three years ago, chip startups such as Habana Labs, Graphcore, and Cerebras Systems grabbed the spotlight with special semiconductors designed expressly for deep learning. Those vendors then moved on to selling whole systems, with newcomers such as SambaNova Systems starting out with that premise. Now, the action is proceeding to a new phase, where vendors are partnering with cloud operators to challenge the entrenched place of Nvidia as the vendor of choice in cloud AI. Cerebras on Thursday announced a partnership with cloud operator Cirrascale to allow users to rent capacity on Cerebras's CS-2 AI machine running in Cirrascale cloud data centers.

EETimes - AI Startup Deep Vision Raises Funds, Preps Next Chip


Edge AI chip startup Deep Vision has raised $35 million in a Series B round of funding led by Tiger Global, joined by existing investors Exfinity Venture Partners, Silicon Motion, and Western Digital. The company began shipping its first-generation chip last year. ARA-1 is designed for power-efficient, low-latency edge AI processing in applications like smart retail, smart cities, and robotics. While the company's name suggests a focus on convolutional neural networks, ARA-1 can also accelerate natural language processing, with support for complex networks such as long short-term memory (LSTM) networks and recurrent neural networks (RNNs). A second-generation chip, ARA-2, with additional features for accelerating LSTMs and RNNs, will launch next year.

DCGAN from Scratch with Tensorflow Keras -- Create Fake Images from CELEB-A Dataset


Generator: the generator creates new data instances that are "similar" to the training data, in our case CelebA images. The generator takes a random latent vector and outputs a "fake" image of the same size as our reshaped CelebA image. Discriminator: the discriminator evaluates the authenticity of the provided images; it classifies the images from the generator against the original images. The discriminator takes a real or fake image and outputs a probability estimate ranging between 0 and 1. Here, D refers to the discriminator network, while G refers to the generator.
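To make the two roles concrete, here is a minimal sketch of a DCGAN generator and discriminator in TensorFlow Keras. The latent dimension, layer sizes, and 64x64 image shape are illustrative assumptions for this sketch, not necessarily the exact architecture used for the CelebA tutorial:

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100         # size of the random latent vector (assumption)
IMG_SHAPE = (64, 64, 3)  # reshaped CelebA image size (assumption)

def build_generator():
    """G: maps a random latent vector to a fake 64x64 RGB image in [-1, 1]."""
    return tf.keras.Sequential([
        layers.Input(shape=(LATENT_DIM,)),
        layers.Dense(8 * 8 * 128),
        layers.Reshape((8, 8, 128)),
        # Each transposed convolution doubles the spatial resolution.
        layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu"),  # 16x16
        layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),   # 32x32
        layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"),    # 64x64
    ])

def build_discriminator():
    """D: outputs a probability in [0, 1] that the input image is real."""
    return tf.keras.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(64, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])

generator = build_generator()
discriminator = build_discriminator()

z = tf.random.normal((4, LATENT_DIM))  # batch of 4 random latent vectors
fake_images = generator(z)             # shape (4, 64, 64, 3)
scores = discriminator(fake_images)    # shape (4, 1), each value in [0, 1]
```

During training, D would be fed batches mixing these fake images with real CelebA images, while G is updated to push D's scores on fakes toward 1.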

La veille de la cybersécurité


Getting the software right is important when developing machine learning models, such as recommendation or classification systems. But at eBay, optimizing the software to run on a particular piece of hardware using distillation and quantization techniques was absolutely essential to ensure scalability. "[I]n order to build a truly global marketplace that is driven by state of the art and powerful and scalable AI services," Kopru said, "you have to do a lot of optimizations after model training, and specifically for the target hardware." With 1.5 billion active listings from more than 19 million active sellers trying to reach 159 million active buyers, the ecommerce giant has a global reach that is matched by only a handful of firms. Machine learning and other AI techniques, such as natural language processing (NLP), play big roles in scaling eBay's operations to reach its massive audience. For instance, automatically generated descriptions of product listings are crucial for displaying information on the small screens of smartphones, Kopru said.
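Post-training quantization, one of the techniques mentioned above, can be sketched with TensorFlow Lite's converter. The tiny model below is a hypothetical stand-in for a trained classifier; eBay's actual models and optimization pipeline are not public:

```python
import tensorflow as tf

# Hypothetical small model standing in for a trained classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Post-training dynamic-range quantization: weights are stored as 8-bit
# integers, shrinking the model and typically speeding up CPU inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # serialized quantized model as bytes
```

Distillation is complementary: a smaller "student" model is trained to match a large model's outputs, and the student can then be quantized for the target hardware.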