
Artificial intelligence and robotics uncover hidden signatures of Parkinson's disease


NEW YORK, NY (March 25, 2022) – A study published today in Nature Communications unveils a new platform for discovering cellular signatures of disease that integrates robotic systems for studying patient cells with artificial intelligence methods for image analysis. Using their automated cell culture platform, scientists at the NYSCF Research Institute collaborated with Google Research to successfully identify new cellular hallmarks of Parkinson's disease by creating and profiling over a million images of skin cells from a cohort of 91 patients and healthy controls. "Traditional drug discovery isn't working very well, particularly for complex diseases like Parkinson's," noted NYSCF CEO Susan L. Solomon, JD. "The robotic technology NYSCF has built allows us to generate vast amounts of data from large populations of patients, and discover new signatures of disease as an entirely new basis for discovering drugs that actually work." "This is an ideal demonstration of the power of artificial intelligence for disease research," added Marc Berndl, Software Engineer at Google Research. "We have had a very productive collaboration with NYSCF, especially because their advanced robotic systems create reproducible data that can yield reliable insights."

Growth Alert! Robotics Rise to $70 Billion by 2028


Many believe that Wall Street is always a step ahead of us, and that it's impossible for the little guys to find amazing companies before it's too late. Well, with a little bit of help, I can steer you toward mega trends that Paul and I think will flourish for years to come. Paul has a way of looking at the market, finding trends and investing in the future like no one else. America 2.0 and the Fourth Industrial Revolution represent the biggest wealth-building era we'll ever see.

Nvidia shares rise as FYQ3 results top expectations, forecast higher, as demand for AI chips 'surges'


Graphics chip powerhouse Nvidia this afternoon reported fiscal Q3 revenue and profit that both topped Wall Street's expectations, along with an outlook for this quarter's revenue that was also higher than expected, driven by record sales of chips for data centers, especially those that crunch artificial intelligence programs. However, revenue from chips used for crypto mining plunged, the company said. The report sent Nvidia shares up almost 4% in late trading. CEO and co-founder Jensen Huang in prepared remarks called the quarter's results "outstanding," noting the company had "record revenue" for its data center chips. Added Huang, "Demand for NVIDIA AI is surging, driven by hyperscale and cloud scale-out, and broadening adoption by more than 25,000 companies." Huang also mentioned the company's GTC event last week, noting it "was our most successful yet, highlighting diverse applications, including supply-chain logistics, cybersecurity, natural language processing, quantum computing research, robotics, self-driving cars, climate science and digital biology."

Waymo will start testing self-driving cars in New York City


Waymo's self-driving vehicle testing has largely focused on warm climates, but it's about to give those machines a harsher trial. Waymo will start driving its autonomous Chrysler Pacifica vans in New York City on November 4th. These vans, and a later wave of Jaguar I-Pace EVs, will rely on human drivers to map streets and learn from the environment, but the goal is clearly to achieve full autonomy. The test will focus on Manhattan below Central Park (aka midtown and lower Manhattan), including the financial district, plus a portion of New Jersey reached via the Lincoln Tunnel. All tests will operate during daylight.

Making machine learning trustworthy


Machine learning (ML) has advanced dramatically during the past decade and continues to achieve impressive human-level performance on nontrivial tasks in image, speech, and text recognition. It increasingly powers high-stakes application domains such as autonomous vehicles, self-mission-fulfilling drones, intrusion detection, medical image classification, and financial prediction (1). However, ML must make several advances before it can be deployed with confidence in domains where it directly affects humans at training and operation, in which cases security, privacy, safety, and fairness are all essential considerations (1, 2). The development of a trustworthy ML model must build in protections against several types of adversarial attack (see the figure). An ML model requires training datasets, which can be "poisoned" through the insertion, modification, or removal of training samples with the purpose of shifting the decision boundary of the model to serve the adversary's intent (3). Poisoning happens when models learn from crowdsourced data or from inputs they receive while in operation, both of which are susceptible to tampering. Purposely crafted inputs called adversarial examples can also evade ML models (4). For example, in an autonomous vehicle, a control model may rely on road-sign recognition for navigation. By placing a tiny sticker on a stop sign, an adversary can fool the model into recognizing the stop sign as a yield sign or a "speed limit 45" sign, whereas a human driver would simply ignore the visually inconsequential sticker and apply the brakes at the stop sign (see the figure). Attacks can also abuse the input-output interaction of a model's prediction interface to steal the ML model itself (5, 6).
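The evasion attack described above can be made concrete with a small sketch. This is an illustrative toy, not the road-sign system discussed in the piece: a hand-built linear classifier stands in for the model, and the loss gradient is written out by hand, in the style of the fast gradient sign method (FGSM) family of attacks.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast gradient sign method: move each input feature by epsilon in
    the direction that increases the model's loss, then clip to [0, 1]."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy linear "road-sign" classifier: score > 0 means "stop sign".
w = np.array([2.0, -1.0, 0.5])
b = -0.2

def predict(x):
    return 1 if w @ x + b > 0 else 0

x = np.array([0.6, 0.4, 0.5])   # clean input, classified as "stop" (1)

# For a linear model with logistic loss and true label 1, the loss
# gradient with respect to x is proportional to -w.
x_adv = fgsm_perturb(x, grad=-w, epsilon=0.35)

print(predict(x))      # 1: the clean sign is recognized
print(predict(x_adv))  # 0: a bounded crafted perturbation flips the label
```

Real attacks compute the gradient through a deep network rather than by hand, but the principle is the same: a perturbation bounded per feature is enough to cross the decision boundary.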
By supplying a batch of inputs (for example, publicly available images of traffic signs) and obtaining predictions for each, a model serves as a labeling oracle that enables an adversary to train a surrogate model that is functionally equivalent to the original. Such attacks pose greater risks for ML models that learn from high-stakes data such as intellectual property and military or national security intelligence. [Figure: Adversarial threats to machine learning. ML models are vulnerable to attacks that degrade model confidentiality and model integrity or that reveal private information. Graphic: Kellie Holoski/Science] When models are trained for predictive analytics on privacy-sensitive data, such as patient clinical data and bank customer transactions, privacy is of paramount importance. Privacy-motivated attacks can reveal sensitive information contained in training data through mere interaction with deployed models (7). The root cause of such attacks is that ML models tend to "memorize" ancillary parts of their training data and, at prediction time, inadvertently divulge identifying details about individuals who contributed to the training data. One common strategy, called membership inference, enables an adversary to exploit the differences in a model's responses to members and nonmembers of a training dataset (7). In response to these threats, the quest for countermeasures is promising. Research has made progress on detecting poisoning and adversarial inputs, and on limiting what an adversary can learn by merely interacting with a model, so as to curb model-stealing and membership-inference attacks (1, 8). One promising example is the formally rigorous formulation of privacy.
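The membership-inference strategy mentioned above exploits the gap between a model's loss on training members and on nonmembers. A minimal sketch, with hypothetical confidence values standing in for a real model's outputs (the threshold and numbers are illustrative, not from any cited attack):

```python
import numpy as np

def loss(p, y):
    """Cross-entropy of predicted probability p for the true label y."""
    return -np.log(p if y == 1 else 1.0 - p)

def infer_membership(confidences, labels, threshold=0.3):
    """Guess 'member' when the model's loss on a record falls below a
    threshold: models tend to fit their training records more tightly."""
    return [bool(loss(p, y) < threshold) for p, y in zip(confidences, labels)]

# Hypothetical confidences: the model is very sure about records it saw
# during training, and noticeably less sure about unseen records.
member_conf    = [0.98, 0.95, 0.97]   # training-set records (low loss)
nonmember_conf = [0.62, 0.55, 0.70]   # unseen records (higher loss)
labels = [1, 1, 1]

print(infer_membership(member_conf, labels))     # [True, True, True]
print(infer_membership(nonmember_conf, labels))  # [False, False, False]
```

Published attacks train "shadow models" to pick the threshold; the one-line version here only shows why memorization leaks membership.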
The notion of differential privacy promises an individual who participates in a dataset that, whether or not their record belongs to a model's training dataset, what an adversary learns about them by interacting with the model is essentially the same (9). Beyond technical remedies, the lessons learned from the ML attack-defense arms race provide opportunities to motivate broader efforts to make ML truly trustworthy in terms of societal needs. Issues include how a model "thinks" when it makes decisions (transparency) and the fairness of an ML model trained to solve high-stakes inference tasks in which human decisions have historically been biased. Making meaningful progress toward trustworthy ML requires understanding the connections, and at times tensions, between traditional security and privacy requirements and the broader issues of transparency, fairness, and ethics when ML is used to address human needs. Several worrisome instances of bias in consequential ML applications have been documented (10, 11), such as race and gender misidentification, wrongly scoring darker-skinned faces as more likely to be criminal, disproportionately favoring male applicants in resume screening, and disfavoring Black patients in medical trials. These harmful consequences require that developers of ML models look beyond technical solutions to win trust among the human subjects affected. On the research front, especially for the security and privacy of ML, the aforementioned defensive countermeasures have solidified understanding of the blind spots of ML models in adversarial settings (8, 9, 12, 13). On the fairness and ethics front, there is more than enough evidence of the pitfalls of ML, especially for underrepresented subjects of training datasets.
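The differential-privacy guarantee described above is commonly realized by adding calibrated noise to query answers. A minimal sketch of the classic Laplace mechanism for a counting query; the counts, epsilon, and seed are illustrative assumptions, not parameters from any cited system:

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Release a counting-query answer with Laplace noise calibrated to
    sensitivity 1: adding or removing one person's record changes the
    true count by at most 1, so noise of scale 1/epsilon suffices."""
    return true_count + rng.laplace(scale=1.0 / epsilon)

rng = np.random.default_rng(0)

# Whether or not your record is in the dataset (count 101 vs. 100),
# the released answers are statistically hard to tell apart.
with_you    = [laplace_count(101, epsilon=0.5, rng=rng) for _ in range(3)]
without_you = [laplace_count(100, epsilon=0.5, rng=rng) for _ in range(3)]
print([round(v, 1) for v in with_you])
print([round(v, 1) for v in without_you])
```

The smaller the epsilon, the larger the noise and the stronger the promise that an adversary's view barely depends on any one individual's record.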
Thus, there is still more to be done by way of human-centered and inclusive formulations of what it means for ML to be fair and ethical. One misconception about the root cause of bias in ML is attributing bias to data and data alone. Data collection, sampling, and annotation play a critical role in introducing historical bias, but there are multiple junctures in the data-processing pipeline where bias can manifest: from data sampling to feature extraction, and from aggregation during training to evaluation methodologies and metrics during testing. At present, there is a lack of broadly accepted definitions and formulations of adversarial robustness (13) and privacy-preserving ML (except for differential privacy, which is formally appealing yet not widely deployed). The lack of transferability of notions of attacks, defenses, and metrics from one domain to another also impedes progress toward trustworthy ML. For example, most of the ML evasion and membership-inference attacks illustrated earlier target applications such as image classification (road-sign detection by an autonomous vehicle), object detection (identifying a flower in a living-room photo with multiple objects), speech processing (voice assistants), and natural language processing (machine translation). The threats and countermeasures proposed in the vision, speech, and text domains hardly translate to one another, or to naturally adversarial domains such as network intrusion detection and financial-fraud detection. Another important consideration is the inherent tension between some trustworthiness properties. For example, transparency and privacy often conflict: if a model is trained on privacy-sensitive data, aiming for the highest level of transparency in production would inevitably leak privacy-sensitive details of data subjects (14).
Thus, choices need to be made about the extent to which transparency is traded for privacy, and vice versa, and such choices need to be made clear to system purchasers and users. Generally, privacy concerns prevail because of the legal implications of failing to enforce them (for example, patient privacy under the Health Insurance Portability and Accountability Act in the United States). Privacy and fairness may also fail to align. For example, although privacy-preserving ML (such as differential privacy) provides a bounded guarantee on the indistinguishability of individual training examples, research shows that, in terms of utility, minority groups in the training data (for example, based on race, gender, or sexuality) tend to be negatively affected by the model's outputs (15). Broadly speaking, the scientific community needs to step back and align the robustness, privacy, transparency, fairness, and ethical norms in ML with human norms. To do this, clearer norms for robustness and fairness need to be developed and accepted. In research, limited formulations of adversarial robustness, fairness, and transparency must be replaced with broadly applicable formulations like the one differential privacy offers. In policy, there need to be concrete steps toward regulatory frameworks that spell out actionable accountability measures on bias and ethical norms for datasets (including diversity guidelines), training methodologies (such as bias-aware training), and decisions on inputs (such as augmenting model decisions with explanations). The hope is that these regulatory frameworks will eventually evolve into ML governance modalities backed by legislation, leading to accountable ML systems.
Most critically, there is a dire need for insights from diverse scientific communities on the societal norms that make a user confident about relying on ML for high-stakes decisions, whether as a passenger in a self-driving car, a bank customer accepting investment recommendations from a bot, or a patient trusting an online diagnostic interface. Policies need to be developed that govern the safe and fair adoption of ML in such high-stakes applications. Equally important, the fundamental tensions between adversarial robustness and model accuracy, privacy and transparency, and fairness and privacy invite more rigorous and socially grounded reasoning about trustworthy ML. Fortunately, at this juncture in the adoption of ML, a consequential window of opportunity remains open to tackle its blind spots before ML is pervasively deployed and becomes unmanageable.

References

1. I. Goodfellow, P. McDaniel, N. Papernot, Commun. ACM 61, 56 (2018).
2. S. G. Finlayson et al., Science 363, 1287 (2019).
3. B. Biggio, B. Nelson, P. Laskov, Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, J. Langford, J. Pineau, Eds. (Omnipress, 2012), pp. 1807–1814.
4. K. Eykholt et al., Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 1625–1634.
5. F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, T. Ristenpart, Proceedings of the 25th USENIX Security Symposium, Austin, TX (USENIX Association, 2016), pp. 601–618.
6. A. Ali, B. Eshete, Proceedings of the 16th EAI International Conference on Security and Privacy in Communication Networks, Washington, DC (EAI, 2020), pp. 318–338.
7. R. Shokri, M. Stronati, C. Song, V. Shmatikov, Proceedings of the 2017 IEEE Symposium on Security and Privacy, San Jose, CA (IEEE, 2017), pp. 3–18.
8. N. Papernot, M. Abadi, U. Erlingsson, I. Goodfellow, K. Talwar, arXiv:1610.05755 [stat.ML] (2017).
9. I. Jarin, B. Eshete, Proceedings of the 7th ACM International Workshop on Security and Privacy Analytics (ACM, 2021), pp. 25–35.
10. J. Buolamwini, T. Gebru, Proceedings of the Conference on Fairness, Accountability and Transparency, New York, NY (ACM, 2018), pp. 77–91.
11. A. Birhane, V. U. Prabhu, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (IEEE, 2021), pp. 1537–1547.
12. N. Carlini et al., arXiv:1902.06705 [cs.LG] (2019).
13. N. Papernot, P. McDaniel, A. Sinha, M. P. Wellman, Proceedings of the 3rd IEEE European Symposium on Security and Privacy (London, 2018), pp. 399–414.
14. R. Shokri, M. Strobel, Y. Zick, Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, New York, NY (2021).
15. V. M. Suriyakumar, N. Papernot, A. Goldenberg, M. Ghassemi, FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM, 2021), pp. 723–734.

AI Learns to Predict Human Behavior from Videos


New York, NY--June 28, 2021--Predicting what someone is about to do next based on their body language comes naturally to humans but not so for computers. When we meet another person, they might greet us with a hello, handshake, or even a fist bump. We may not know which gesture will be used, but we can read the situation and respond appropriately. In a new study, Columbia Engineering researchers unveil a computer vision technique for giving machines a more intuitive sense of what will happen next by leveraging higher-level associations between people, animals, and objects. "Our algorithm is a step toward machines being able to make better predictions about human behavior, and thus better coordinate their actions with ours," said Carl Vondrick, assistant professor of computer science at Columbia, who directed the study, which was presented at the Conference on Computer Vision and Pattern Recognition on June 24, 2021. "Our results open a number of possibilities for human-robot collaboration, autonomous vehicles, and assistive technology."

UiPath, robotic process automation startup, tops expectations in first quarterly report


UiPath, the robotic process automation startup that came public April 21st, this afternoon reported fiscal Q1 revenue that easily topped Wall Street's expectations, a surprise profit where the Street had been expecting a net loss, and an outlook for this quarter's revenue that was higher as well. Despite the upbeat report, UiPath shares fell 9% in late trading. CEO and co-founder Daniel Dines called the quarter "an exceptionally strong start to fiscal year 2022." Dines noted first-quarter annualized recurring revenue, or ARR, rose over 64 percent, year-over-year, to $653 million, calling it "a testament to our leadership position in enterprise software automation." Added Dines, "We believe automation is the next layer in the software stack."

3 Artificial Intelligence Stocks Leading the New Wave


Sometimes a new technology changes the world forever. Five thousand years ago, a nameless Sumerian marked clay tablets with a stylus and invented writing; a little over three centuries ago, the steam engine took its place in our lives; early in the last century, Henry Ford came up with the assembly line. There's no telling which innovation will prove game-changing, but it is possible to narrow the field. And that brings us to AI. Artificial intelligence may just be the next big idea. It's not quite new: computer scientists and programmers have been working on 'intelligent machines' since at least the 1950s. But the tech is finally maturing, and autonomous computers, capable of collating data and making decisions in real time, are no longer a pipe dream. The implications are staggering. Practical AI makes it possible for machines to learn, and to apply that learning. AI programs underlie advanced voice and facial recognition systems and fraud detection programs, applications that depend on pattern recognition. More advanced AI is being applied in the automotive industry, where it is used to monitor automobile systems in real time and to enable driverless vehicles. None of this has been lost on Wall Street. Analysts say that plenty of compelling investments can be found within this space. With this in mind, we've opened up TipRanks' database and pulled three AI stocks that are on the leading edge of the technology. Importantly, all three earn Moderate or Strong Buy consensus ratings from the analyst community and boast considerable upside potential. TuSimple Holdings (TSP) The first AI stock we're looking at, TuSimple Holdings, is deeply involved in the autonomous vehicle industry. The company is working on AI systems to power self-driving trucks, allowing for greater efficiency and safety in the long-haul trucking industry.
TuSimple has developed an advanced autonomous driving system tailored to the needs of the trucking industry; the company's AI backs a long-range perception system that can spot, recognize, and identify objects as far away as 1,000 meters. In another achievement, TuSimple last summer launched an Autonomous Freight Network, through which the company will address the trucking industry's challenges. TuSimple's AI tech will allow the company's trucks to conduct long-haul freight runs, monitoring sensor systems to keep the truck on the road and navigate to its destination in all weather and even in heavy traffic. To raise capital, TuSimple held its IPO last month, offering 33.75 million shares to the public at $40 per share. Of those, 27 million were offered by the company, with an existing shareholder putting 6.75 million shares on the market. TuSimple received the proceeds from the shares it sold directly, totaling over $1.08 billion before expenses. Writing from Canadian investment bank RBC, analyst Joseph Spak notes that TuSimple is highly speculative, but that if it succeeds, the rewards will be enormous. "We understand concerns about vetting the technology, adoption and the path towards revenue and profitability. But if TuSimple succeeds, the equity value is significantly higher. As such, we view TuSimple very much like a venture investment in the public markets or perhaps, a biotech stock. The upside opportunity is massive. Proof points (milestones, orders) along the way should increase the market's confidence in TuSimple's mid-term targets and long-term opportunity, thereby increasing its stock price," Spak explained. In line with his comments, Spak rates TSP an Outperform (i.e., Buy) and sets a $52 price target that suggests upside of 44% over the next 12 months. (To watch Spak's track record, click here) Overall, TuSimple personifies everything that risk-loving investors want in the stock market.
It uses cutting-edge tech; it has staked out a position in a field that is not quite here yet, but is coming; and it's an early mover. While still in the early stages of building out its products and AI systems, the stock has attracted 7 analyst reviews, 6 to Buy and 1 to Hold, giving it a Strong Buy consensus rating. The shares are selling for $36.08, and the $54.70 average price target implies a one-year upside of ~52%. (See TSP stock analysis on TipRanks) Nvidia Corporation (NVDA) Next up is Nvidia, one of the giants of the silicon microprocessor industry, maker of the computer chips that make high-tech systems possible. Nvidia was the eighth-largest chip maker last year, with more than $16 billion in total sales, up 53% from the year before. Nvidia's chief connection to AI is through the automotive industry. The company has long sold chips to car makers (automotive business makes up between 5% and 10% of Nvidia's sales), but over the past year car makers have been ordering more AI-capable systems. Nvidia delivers chips and associated packages that allow an autonomous vehicle's AI system to build perception, mapping, planning, and monitoring capabilities. Nvidia is also working to transfer its automotive AI systems into the data center segment; the monitoring needs of large server stacks are comparable to those of autonomous vehicles and will benefit from the application of machine learning. Covering NVDA for Baird, 5-star analyst Tristan Gerra rates the stock an Outperform (i.e., Buy) along with an $800 price target, which implies ~45% upside. The bull thesis is based on "Nvidia's strong near-term positioning in AI data center markets and longer-term opportunities across many accelerated computing applications." (To watch Gerra's track record, click here) "As Nvidia increasingly moves to platform solutions targeting and enabling all AI markets, while diversifying its architecture offering, the company is poised to over time dominate data center.
Omniverse gives us an early glimpse of a virtual 3D world of which Nvidia is at the forefront, ultimately yielding a matrix computing world. More near term, the GTC-announced foray into CPUs will expand Nvidia's computing TAM," Gerra opined. Overall, no fewer than 27 analysts have put reviews on NVDA on record, and of those, 24 are to Buy against just 3 to Hold. NVDA shares are selling for $550.34; the average price target of $682.20 implies an upside of 24% from that level. (See Nvidia stock analysis on TipRanks) Upstart Holdings (UPST) We'll finish in financial tech, where Upstart Holdings has applied AI technology to power a lending platform. Using AI, the company aims to evaluate borrowers to determine actual risk levels and creditworthiness. A clearer understanding of the true risks of lending money allows lenders to approve more transactions, gives otherwise marginal borrowers greater access to capital, and provides cost savings on both ends. Upstart boasts that its AI analysis platform has helped more than 698,000 customers acquire loans, and that its model provides for 27% more loan approvals than traditional credit-scoring methods. Upstart's AI evaluates 1,600 data points and lets borrowers access funds at rates 16% lower than would otherwise be possible. The company has been in business since 2012 and went public on the NASDAQ in December 2020. The IPO made 9 million shares available to the public at $20 each, raising $180 million. In March of this year, Upstart released its first quarterly report as a publicly traded entity. The company reported $86.7 million in total revenue, up 39% from one year earlier. Of that total, $84.4 million was derived from usage fees. For the full year 2020, Upstart saw a 42% year-over-year increase in revenue, to $233.4 million. Among the bulls is Piper Sandler analyst Arvind Ramnani, who is impressed by both the company's model and its forward prospects.
"We expect Upstart to expand its market share well beyond its primary product focus of unsecured personal loans, and its recently announced auto loans... Key to Upstart's AI offering is its a) inherent training data advantage backed by the >1,620 variables aggregated to inform their models; b) AI algorithms that have been extensively tested and refined; c) over 10.5M discrete repayment events that further validate the data and algorithms. Upstart's SaaS-based revenue model (only ~1% balance sheet loan exposure) has the ability to deliver upside to our 58% CAGR (2020-2023E), in a massive market ($700B NT; $3.4T LT opportunity)," Ramnani opined. To this end, the analyst rates UPST shares an Overweight (i.e., Buy), and his $143 price target implies an upside of 65%. (To watch Ramnani's track record, click here) Let's take a look at how the rest of the Street sees 2021 panning out for UPST. Based on 4 Buys and 2 Holds, the stock has a Moderate Buy consensus rating. The average price target is $123.50, suggesting a 34.5% upside potential from the trading price of $91.82. (See UPST stock analysis on TipRanks) To find good ideas for AI stocks trading at attractive valuations, visit TipRanks' Best Stocks to Buy, a newly launched tool that unites all of TipRanks' equity insights. Disclaimer: The opinions expressed in this article are solely those of the featured analysts. The content is intended to be used for informational purposes only. It is very important to do your own analysis before making any investment.

Artificial Intelligence: Empowering Futuristic Automotive Vehicles · Wall Street Call


Artificial intelligence (AI) helps a vehicle make decisions in complex environments, and the automotive industry uses it to enable smart mobility. At present, the industry has deployed advanced driver-assistance systems (ADAS), and with increasing amounts of embedded intelligence it is progressing toward semi-autonomous vehicles. AI enables real-time recognition of surroundings, automates vehicle mobility, controls in-vehicle systems, and ultimately helps prevent accidents. Applications of AI in the automotive sector include road tracking, capturing the driver's gestures and expressions, passenger experience, fleet management, weather monitoring, predictive maintenance, location search, e-payment, and in-vehicle system control.

How an Automated Data Labeling Platform Fuels Self-driving Industry?


NEW YORK, NY / ACCESSWIRE / August 26, 2020 / "I'm extremely confident that self-driving cars or essentially complete autonomy will happen, and I think it will happen very quickly," Tesla CEO Elon Musk said in a virtual speech to the World Artificial Intelligence Conference in July 2020. Musk said Tesla will have basic functionality for level-five complete autonomy this year. Self-driving vehicles are not just hot in Silicon Valley. In China, the largest automobile market worldwide, companies are also getting on board to develop autonomous driving technology, including China's internet search tycoon Baidu, also referred to as the "Google of China." Baidu has been developing autonomous driving technology through its "Apollo" project (also known as the open-source Apollo platform), launched in April 2017.