Authors: M. Li, Y. Sun, H. Lu, S. Maharjan and Z. Tian
Title: Deep Reinforcement Learning for Partially Observable Data Poisoning Attack in Crowdsensing Systems
Affiliation: Communication Systems
Project(s): Simula Metropolitan Center for Digital Engineering, The Center for Resilient Networks and Applications
Publication Type: Journal Article
Year of Publication: 2020
Journal: IEEE Internet of Things Journal (Early Access)
Publisher: IEEE

Crowdsensing systems collect various types of data from sensors embedded in mobile devices owned by individuals. These individuals, commonly referred to as workers, complete tasks published by crowdsensing systems. Because of the relative lack of control over worker identities, crowdsensing systems are susceptible to data poisoning attacks, which interfere with data analysis results by injecting fake data that conflicts with the ground truth. Frameworks like TruthFinder can resolve data conflicts by evaluating the trustworthiness of the data providers. Such frameworks make crowdsensing systems more robust, since they limit the impact of dirty data by down-weighting unreliable workers. However, previous work has shown that TruthFinder can also be compromised by data poisoning attacks when the malicious workers have access to global information. In this paper, we focus on partially observable data poisoning attacks in crowdsensing systems. We show that even if the malicious workers only have access to local information, they can find effective data poisoning strategies to interfere with crowdsensing systems that use TruthFinder. First, we formally model the problem of partially observable data poisoning attacks against crowdsensing systems. Then, we propose a data poisoning attack method based on deep reinforcement learning, which helps malicious workers compromise TruthFinder while concealing themselves. With this method, the malicious workers can learn from their attack attempts and continuously evolve their poisoning strategies. Finally, we conduct experiments on real-life datasets to verify the effectiveness of the proposed method.
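To make the truth-discovery mechanism the attack targets concrete, the following is a minimal, simplified sketch of a TruthFinder-style iteration: worker trustworthiness and claim confidence are updated alternately until they stabilize. The function name, the initial trust of 0.9, and the `damping` parameter are illustrative assumptions; the actual TruthFinder algorithm additionally models implications between claims and differs in its normalization.

```python
import math

def truth_finder(claims, n_iter=10, damping=0.3):
    """Simplified TruthFinder-style truth discovery (illustrative sketch).

    claims: dict mapping worker -> {object: reported value}.
    Returns (trust per worker, most-confident value per object).
    """
    workers = list(claims)
    trust = {w: 0.9 for w in workers}  # assumed initial trustworthiness
    conf = {}
    for _ in range(n_iter):
        # Claim confidence: sum the trust scores of supporting workers,
        # using tau = -ln(1 - t) so scores combine additively.
        conf = {}
        for w in workers:
            tau = -math.log(max(1e-9, 1.0 - trust[w]))
            for obj, val in claims[w].items():
                conf[(obj, val)] = conf.get((obj, val), 0.0) + tau
        # Worker trust: average confidence of the claims the worker made,
        # mapped back into (0, 1) with a damping factor.
        for w in workers:
            scores = [1.0 - math.exp(-damping * conf[(o, v)])
                      for o, v in claims[w].items()]
            trust[w] = sum(scores) / len(scores)
    # Resolve each object to its highest-confidence value.
    best = {}
    for (obj, val), c in conf.items():
        if obj not in best or c > best[obj][1]:
            best[obj] = (val, c)
    return trust, {o: v for o, (v, _) in best.items()}

# Example: three consistent reports outweigh one conflicting report,
# and the conflicting worker ends up with lower trust.
claims = {"w1": {"temp": 20}, "w2": {"temp": 20},
          "w3": {"temp": 20}, "w4": {"temp": 99}}
trust, truths = truth_finder(claims)
```

A poisoning attack against such a scheme must inject false values without letting the attackers' trust scores collapse, which is exactly the trade-off the paper's partially observable setting makes hard.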

Citation Key: 27388