DUBLIN, IRELAND--(Marketwired - October 17, 2017) - RecommenderX today announced that it won the Best Use of Data Science In A Start Up Award at the DatSci event held in Dublin on September 21, 2017. DatSci is an annual event that brings together and recognizes the best and brightest that Ireland has to offer in the expanding world of Data Science. RecommenderX is a technology company focused on helping customers and partners improve productivity, performance, customer engagement, sales and profitability by transforming Artificial Intelligence (AI) into Business Intelligence (BI). RecommenderX is the top spin-out of Insight, Europe's largest Centre for Data Analytics, with deep domain knowledge in Data Analytics, Artificial Intelligence (AI), Machine Learning (ML), Personalization Technology, Recommender Systems and Explainable AI. "We are thrilled to be an award winner at DatSci 2017," stated Kevin McCarthy, Co-Founder & CTO of RecommenderX. "It is a fantastic validation of the efforts our world-class team has been making to help companies all over the world harness their data by developing cutting-edge applications and solutions that leverage data science and AI technologies."
On Tuesday, NVIDIA unveiled the world's first artificial intelligence (AI) computer designed to drive fully autonomous vehicles by mid-2018. The new system, named Pegasus, extends the NVIDIA Drive PX AI computing platform to operate vehicles with Level 5 autonomy--without steering wheels, pedals, or mirrors. New types of cars will be invented, resembling offices, living rooms or hotel rooms on wheels. "The company hasn't claimed to have developed all the software, hardware, and data needed for automated driving; it's merely announced that it plans to market a chip that in theory could support the hardware and software envisioned for such a system," Walker Smith said.
By modeling human testers, including manual and test-automation tasks such as scripting, Appvance has developed algorithms and expert systems to take on those tasks, much as driverless-vehicle software models what a human driver does. The Appvance AI technology learns from a range of existing data sources: it can map an application fully on its own, and it also draws on server logs, Splunk or Sumo Logic production data, form input data, valid headers and requests, expected responses, changes in each build, and more. The resulting test executions represent real, data-driven user flows with near-100% code coverage. Built from the ground up with DevOps, agile and cloud services in mind, Appvance offers true beginning-to-end data-driven functional, performance, compatibility, security and synthetic APM test automation and execution, enabling dev and QA teams to identify issues in a fraction of the time of other test automation products.
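The core idea described above, recovering real user flows from server logs, can be sketched in a few lines. This is a minimal illustration with made-up log lines and helper names, not Appvance's actual implementation: it simply groups requests by client to reconstruct each user's page sequence.

```python
from collections import defaultdict

# Toy access-log lines: client IP, HTTP method, requested path.
LOG_LINES = [
    "10.0.0.1 GET /login",
    "10.0.0.2 GET /home",
    "10.0.0.1 POST /cart",
    "10.0.0.1 POST /checkout",
    "10.0.0.2 GET /search",
]

def mine_user_flows(lines):
    """Group requests by client to recover each user's page sequence."""
    flows = defaultdict(list)
    for line in lines:
        client, _method, path = line.split()
        flows[client].append(path)
    return dict(flows)

flows = mine_user_flows(LOG_LINES)
# flows["10.0.0.1"] is the flow ["/login", "/cart", "/checkout"]
```

A real system would additionally sessionize by timestamp and cookie rather than IP, and would feed the mined flows into test generation; this sketch only shows the log-to-flow step.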
In this post, I'll offer a look at data science's buzzwords from multiple perspectives, namely the theorist's, the empirical data scientist's, and the press-release bluster that too often is parroted by the mainstream press. Data Scientist: Unlike the toy datasets that long dominated machine learning research, today's big data is sufficiently large that it cannot fit conveniently in main memory on a single workstation. In short, big data is more data than can fit in main memory on a single machine. Theorist: Deep neural networks refer to graphical models in which data is processed by successive layers of nodes.
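The theorist's description, data processed by successive layers of nodes, can be made concrete with a tiny forward pass. This is an illustrative sketch in plain NumPy (random weights, no training), not tied to any framework or system mentioned in this post:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """A common layer nonlinearity: zero out negative values."""
    return np.maximum(x, 0.0)

# Three successive layers: each multiplies by a weight matrix and
# applies a nonlinearity before passing the result to the next layer.
layer_sizes = [8, 16, 16, 4]
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x, weights):
    for w in weights:
        x = relu(x @ w)
    return x

batch = rng.standard_normal((5, 8))   # 5 samples, 8 features each
out = forward(batch, weights)
# out has shape (5, 4): one 4-dimensional output per input sample
```

"Deep" simply means more such layers stacked between input and output; training would adjust the weight matrices, which are random here.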
Running OptiX 5.0 on the NVIDIA DGX Station -- the company's recently introduced deskside AI workstation -- will give designers, artists and other content-creation professionals the rendering capability of 150 standard CPU-based servers. To achieve equivalent rendering performance of a DGX Station, content creators would need access to a render farm with more than 150 servers that require some 200 kilowatts of power, compared with 1.5 kilowatts for a DGX Station.

Certain statements in this press release including, but not limited to, statements as to: the impact, benefits, performance and availability of NVIDIA OptiX 5.0 SDK and the NVIDIA DGX Station; AI transforming industries and having the potential to turbocharge the creative process are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the reports NVIDIA files with the Securities and Exchange Commission, or SEC, including its Form 10-Q for the fiscal period ended April 30, 2017.
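The power comparison above can be checked with back-of-the-envelope arithmetic, using only the figures quoted in the release:

```python
# Figures quoted in the release.
render_farm_kw = 200.0    # power draw of the ~150-server CPU render farm
dgx_station_kw = 1.5      # power draw of one DGX Station

power_ratio = render_farm_kw / dgx_station_kw
# The DGX Station draws roughly 1/133rd the power of the quoted farm,
# i.e. about 133x less power for the equivalent rendering throughput.
```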
Total revenue rose 1 percent to $39.85 billion. The elevated performance in the second quarter was due mostly to a lowering of the company's corporate tax rate, from 30 percent down to 10 percent, Chief Financial Officer Bob Shanks acknowledged. But analysts pointed out that the lower tax rate likely implies a lower full-year net income than the $9 billion Ford previously guided. Ford Credit's revenue rose 7 percent to $2.7 billion in the quarter.
BEIJING, CHINA--(Marketwired - Sep 12, 2016) - GPU Technology Conference China - NVIDIA (NASDAQ: NVDA) today unveiled the latest additions to its Pascal architecture-based deep learning platform: new NVIDIA Tesla P4 and P40 GPU accelerators and new software that deliver massive leaps in efficiency and speed to accelerate inferencing production workloads for artificial intelligence services. With its small form factor and low-power design, which starts at 50 watts, the Tesla P4 fits in any server, making it 40x more energy efficient than CPUs for inferencing in production workloads. A single server with a single Tesla P4 replaces 13 CPU-only servers for video inferencing workloads, delivering over 8x savings in total cost of ownership, including server and power costs. The NVIDIA DeepStream SDK taps into the power of a Pascal server to simultaneously decode and analyze up to 93 HD video streams in real time, compared with seven streams with dual CPUs. Integrating deep learning into video applications allows companies to offer smart, innovative video services that were previously impossible to deliver.

Leap Forward for Customers
NVIDIA customers are delivering increasingly innovative AI services that require the highest compute performance.
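The consolidation figures quoted above are internally consistent, which a quick calculation shows (using only the numbers in the release):

```python
# Figures quoted in the release.
cpu_only_servers_replaced = 13            # by one server with a single Tesla P4
pascal_streams, dual_cpu_streams = 93, 7  # real-time HD streams decoded + analyzed

stream_ratio = pascal_streams / dual_cpu_streams
# One Pascal server handles roughly 13x the concurrent streams of a
# dual-CPU server, in line with the 13-server consolidation claim.
```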
Pure Storage (NYSE: PSTG), the market's leading independent all-flash data platform vendor for the cloud era, today announced Pure1 META, its Artificial Intelligence (AI) platform for delivering on the vision of self-driving storage. This new capability will allow customers to answer questions about new workload deployment, interaction, performance and capacity growth, and workload optimization, helping reduce risk, increase consolidation, and provide better visibility to plan for upgrades or expansions.

About Pure Storage
Pure Storage (NYSE: PSTG) helps companies push the boundaries of what's possible. Customers who purchase Pure Storage's product offerings should make their purchase decisions based upon products, features and functions that are currently available.
Ride service companies like Uber and Lyft are focused on the technology of self-driving cars, but what about everything else? Lyft and nuTonomy will be doing R&D in the Boston area at the Raymond L. Flynn Marine Park and nearby at Seaport and Fort Point. During trials, "an engineer from nuTonomy rides in each of its vehicles during testing to observe system performance and assume control if needed," the company said. Following initial trials, Lyft and nuTonomy could expand to gather even more data and learn "about the ideal function, performance and features of an autonomous mobility-on-demand service," they say.
Outfitted for machine learning, artificial intelligence, and FinTech, AccelStor all-flash arrays deliver accelerated performance and effortless data processing for storage, enabling enterprise businesses to run efficiently. "Our NeoSapphire series provides superior-performing flash arrays for SMEs, enterprises and data centers, and helps customers reduce risk, plan resources, and accelerate time to value for enterprise-scale big data deployments." This scalable flash array lays the groundwork for succeeding NeoSapphire models, demonstrating AccelStor's capability to provide large-capacity, feature-rich, high-performance storage arrays with high availability. AccelStor's NeoSapphire all-flash arrays, powered by FlexiRemap software technology, deliver sustained high IOPS for business-critical applications.