Show simple item record

dc.contributor.author: Reza, Khondker Jahid
dc.contributor.author: Islam, Md Zahidul
dc.contributor.author: Estivill-Castro, Vladimir
dc.contributor.editor: Li, T
dc.contributor.editor: Lopez, LM
dc.contributor.editor: Li, Y
dc.date.accessioned: 2018-12-20T05:08:37Z
dc.date.available: 2018-12-20T05:08:37Z
dc.date.issued: 2017
dc.identifier.isbn: 9781538618295
dc.identifier.doi: 10.1109/ISKE.2017.8258834
dc.identifier.uri: http://hdl.handle.net/10072/381583
dc.description.abstract: Sensitive information of an Online Social Network (OSN) user can be discovered through sophisticated data mining, even if the user does not directly reveal such information. Malicious data miners can build a decision tree/forest from a data set containing information about a huge number of OSN users, learn general patterns from it, and then use those patterns to discover the sensitive information of a target user who has not revealed it directly. An existing technique called 3LP suggests that users suppress some information (such as hometown) and add and/or delete some friendship links to protect their sensitive information (such as political views). In a previous study, 3LP was applied to a training data set to discover the general patterns, and then to a testing data set to protect the sensitive information of the users in that set. Once the testing data set had been modified following 3LP's suggestions, the previous study cross-checked the users' privacy level using the same general patterns previously discovered from the training data set. In this paper, however, we argue that the general patterns of the training data set will change due to the modifications made in the testing data set, and hence the new general patterns should be used to test the privacy level of the users in the testing data set. Therefore, in this study, we use a different attack model in which the training data set differs after the initial use of 3LP and an attacker can use any classifier in addition to decision forests. We also argue that data utility should be measured along with the privacy level to evaluate the effectiveness of a privacy technique. Finally, we experimentally compare 3LP with another existing method.
dc.description.peerreviewed: Yes
dc.language: English
dc.publisher: IEEE
dc.publisher.place: United States
dc.relation.ispartofconferencename: 12th International Conference on Intelligent Systems and Knowledge Engineering (IEEE ISKE)
dc.relation.ispartofconferencetitle: 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (IEEE ISKE)
dc.relation.ispartofdatefrom: 2017-11-24
dc.relation.ispartofdateto: 2017-11-26
dc.relation.ispartoflocation: Nanjing, People's Republic of China
dc.relation.ispartofvolume: 2018-January
dc.subject.fieldofresearch: Pattern recognition
dc.subject.fieldofresearch: Data mining and knowledge discovery
dc.subject.fieldofresearchcode: 460308
dc.subject.fieldofresearchcode: 460502
dc.title: Social media users' privacy against malicious data miners
dc.type: Conference output
dc.type.description: E1 - Conferences
dc.type.code: E - Conference Publications
gro.hasfulltext: No Full Text
gro.griffith.author: Estivill-Castro, Vladimir


Files in this item

There are no files associated with this item.

This item appears in the following Collection(s)

  • Conference outputs
    Contains papers delivered by Griffith authors at national and international conferences.