Hacking deep learning: model inversion attack by example
This article was originally published at https://blogs.rstudio.com/tensorflow/

Compared to other applications, deep learning models might not seem like likely victims of privacy attacks. However, methods exist to determine whether an entity was used in the training set (an adversarial attack called membership inference), and techniques subsumed under "model inversion" make it possible to reconstruct raw data inputs given just model output (and sometimes, context information). This post shows an end-to-end example of model inversion, and explores mitigation strategies using TensorFlow Privacy.
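To make the "model inversion" idea concrete before diving in: in its simplest, gradient-based form, the attacker starts from random noise and iteratively adjusts that input so the target model assigns high confidence to a chosen class. The sketch below illustrates this with a hypothetical stand-in model and placeholder settings; it is not the setup developed later in the post.

```r
library(tensorflow)
library(keras)

# Stand-in target model: a small, untrained classifier over 28 x 28 inputs.
# In a real attack, this would be the victim model the attacker can query.
target_model <- keras_model_sequential() %>%
  layer_flatten(input_shape = c(28, 28)) %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")

target_class <- 4L  # class whose "representative" input we try to reconstruct

# Optimize the input itself, not the model weights.
x <- tf$Variable(tf$random$uniform(shape(1, 28, 28)))

for (step in 1:200) {
  with(tf$GradientTape() %as% tape, {
    probs <- target_model(x, training = FALSE)
    # Maximize the probability assigned to the target class
    # (minimize its negative log-probability).
    loss <- -tf$math$log(probs[1, target_class] + 1e-8)
  })
  grad <- tape$gradient(loss, x)
  # Plain gradient descent step on the input.
  x$assign_sub(0.1 * grad)
  # Keep the reconstruction in the valid pixel range.
  x$assign(tf$clip_by_value(x, 0, 1))
}

reconstruction <- x$numpy()  # candidate reconstruction for the target class
```

After enough iterations, `x` holds a candidate "representative" input for the target class; how recognizable it is depends heavily on the model and on whatever context information the attacker has.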