Research: Risks of Friendships on Social Networks
Authors: Cuneyt Gurcan Akcora, Barbara Carminati, and Elena Ferrari (DISTA, Università degli Studi dell'Insubria, Via Mazzini 5, Varese, Italy). Risks of Friendships on Social Networks was submitted to and accepted by the 2012 IEEE International Conference on Data Mining (ICDM).
Abstract: In this paper, the authors explore the risks posed by friends in social networks as a result of their friendship patterns, using real-life social network data and starting from a previously defined risk model. In particular, they observe that the risks of friendships can be mined by analyzing users' attitudes towards friends of friends. This yields new insights into friendship and risk dynamics on social networks.
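To make the friends-of-friends idea concrete, the following is a minimal illustrative sketch, not the authors' actual risk model: it scores a friendship by the fraction of that friend's friends who are strangers to the user, on the assumption that a friend who introduces many unknown friends-of-friends represents a higher exposure. All names and the scoring rule here are assumptions for illustration only.

```python
# Illustrative sketch only: a toy "friend risk" proxy inspired by the paper's
# idea of analyzing a user's attitude towards friends of friends. This is NOT
# the authors' risk model; the scoring rule is an assumption for illustration.

def friend_risk(user, friend, friends_of):
    """Fraction of `friend`'s friends who are strangers to `user`.

    A higher value means befriending `friend` exposes `user` to more
    unknown friends-of-friends, used here as a crude risk proxy.
    """
    fof = friends_of[friend] - {user}          # friends of friend, excluding the user
    if not fof:
        return 0.0
    strangers = fof - friends_of[user]         # those the user is not connected to
    return len(strangers) / len(fof)

# Toy social graph: adjacency sets keyed by user id.
friends_of = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol", "dave", "eve"},
    "carol": {"alice", "bob"},
    "dave": {"bob"},
    "eve": {"bob"},
}

print(friend_risk("alice", "bob", friends_of))    # dave and eve are strangers -> 2/3
print(friend_risk("alice", "carol", friends_of))  # carol's other friend is bob -> 0.0
```

Under this toy rule, bob is riskier for alice than carol because most of bob's circle is unknown to her; a real model, such as the one the paper builds on, would weight such signals against observed user behavior.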
Analysis: The paper's summarized analysis includes observations on friendship patterns and the risk dynamics they create for social network users.
Applicability: Risks of Friendships on Social Networks offers unique insight into the privacy risks of online friendships and provides salient considerations for the development of risk models that could be applied to social network users.
Access: (PDF) http://bit.ly/Xk5mlX (arXiv.org)
This entry was posted on Tuesday, February 19th, 2013 at 2:26 pm. It is filed under chronology and discover, and tagged with research and social media.
Drawn from public market sizing estimates shared in leading electronic discovery reports, publications, and posts over time, the following eDiscovery Market Size Mashup presents general worldwide market sizing considerations for both the software and services segments of the electronic discovery market for the years 2013 through 2018.
Provided as a non-comprehensive overview of key, publicly announced eDiscovery-related mergers, acquisitions, and investments to date in 2014, the following listing highlights key industry activity by announcement date, acquired company, acquiring or investing company, and acquisition amount (if known).
Cormack and Grossman set up an ingenious experiment to test the effectiveness of three machine learning protocols. It is ingenious for several reasons, not least because they created what they call an "evaluation toolkit" to perform the experiment. They have made this toolkit, the same software, freely available for use by any other qualified researchers, inviting scientists and vendors alike to run the experiment for themselves and to test it openly. So far, however, there have been no takers.
I want to talk about an issue that is attracting attention at the moment: how to select documents for training a predictive coding system. The catalyst for this current interest is “Evaluation of Machine Learning Protocols for Technology Assisted Review in Electronic Discovery”, recently presented at SIGIR by Gord Cormack and Maura Grossman.