How Researchers Are Building Models To Safeguard Private Data In Machine Learning
More machine learning applications are permeating the tech ecosystem, and the data fed into ML systems is drawn from all sorts of sources, regardless of its sensitivity. ML algorithms are blind to that sensitivity: they treat data purely as material for learning patterns, not as information about identifiable people. Miscreants can exploit this to attack the ML systems themselves, with potentially devastating effects that would defeat the purpose of ML entirely. To counter this and establish a secure, safe ML environment, researchers are actively working on building privacy into ML models.
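The article does not name a specific technique, but one widely used way of "building privacy in" is differential privacy, where calibrated noise is added to a statistic before release so that no single individual's record can be inferred from the output. The sketch below is a minimal, hypothetical illustration of the standard Laplace mechanism; the function name, dataset, and parameter values are assumptions for the example, not taken from the article.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return an epsilon-differentially-private estimate of true_value.

    Adds Laplace noise with scale sensitivity/epsilon, the standard
    mechanism for privately releasing a numeric query result.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Hypothetical example: privately release the mean age of a small dataset.
ages = np.array([23, 35, 41, 29, 52])
# If ages are bounded in [0, 100], changing one person's record shifts
# the mean by at most 100 / len(ages) -- that bound is the sensitivity.
sensitivity = 100 / len(ages)
private_mean = laplace_mechanism(ages.mean(), sensitivity, epsilon=1.0)
```

A smaller epsilon buys stronger privacy at the cost of noisier answers, which is the central trade-off any privacy-preserving ML system has to manage.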
Jul-19-2018, 19:05:50 GMT