Presented during the 2012 ACM Conference on Computer Supported Cooperative Work (CSCW 2012), the following original research on “the value of microblog content” is shared for your review and consideration.
Prepared by Paul André (Carnegie Mellon University), Michael Bernstein (MIT) and Kurt Luther (Georgia Institute of Technology), “Who Gives a Tweet? Evaluating Microblog Content Value” offers quantitative insight into the perceived value of Twitter “tweets” through the lens of content, context and evolving social norms.
Based on more than 43,000 volunteer ratings of tweets, this milestone study considers several primary questions about what makes microblog content worth reading. As the authors explain:
“Conventional wisdom exists around these questions, but to our knowledge this is the first work to rigorously examine whether the commonly held truths are accurate. Further, by collecting many ratings, we are able to quantify effect sizes. A better understanding of content value will allow us to improve the overall experience of microblogging.” (Study Authors)
Predictors of tweet value in this study were based on ratings of individual tweets as “worth reading,” “neutral,” or “not worth reading,” with tweets drawn from eight specific content categories.
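As a rough illustration of how such three-point ratings might be tallied per content category, here is a minimal sketch in Python. The category names, data layout, and field names are illustrative assumptions, not the authors’ actual dataset or analysis code.

```python
from collections import defaultdict

# Hypothetical (category, rating) pairs on the study's three-point scale.
# Category names and sample values are illustrative only.
ratings = [
    ("Information Sharing", "worth reading"),
    ("Information Sharing", "neutral"),
    ("Self Promotion", "not worth reading"),
    ("Question to Followers", "worth reading"),
]

# Tally ratings per category.
counts = defaultdict(lambda: defaultdict(int))
for category, rating in ratings:
    counts[category][rating] += 1

# Report the share of "worth reading" ratings in each category.
for category, tally in counts.items():
    total = sum(tally.values())
    share = tally["worth reading"] / total
    print(f"{category}: {share:.0%} rated worth reading ({total} ratings)")
```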
Additionally, raters’ stated reasons for liking or disliking a tweet fell under two headings:
Reasons for Liking
Reasons for Disliking
Source: “Who Gives a Tweet? Evaluating Microblog Content Value” – Paul André (Carnegie Mellon University), Michael Bernstein (MIT) and Kurt Luther (Georgia Institute of Technology) – as prepared for CSCW ’12, February 11–15, 2012, Seattle, Washington.
Provided as a non-comprehensive overview of more than 100 key, publicly announced eDiscovery-related mergers, acquisitions and investments since 2001, the following listing highlights industry activity by announcement date, acquired company, acquiring or investing company and acquisition amount (if known).
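For readers who want to work with such a listing programmatically, one way to model each entry is as a simple record. The class and field names below are assumptions for illustration, not the listing’s actual format, and the sample values are placeholders.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EDiscoveryDeal:
    """One publicly announced merger, acquisition, or investment.

    Fields mirror the listing's columns; names are illustrative.
    """
    announced: str                       # announcement date, e.g. "2012-02-21"
    acquired: str                        # acquired company
    acquirer: str                        # acquiring or investing company
    amount_usd: Optional[float] = None   # acquisition amount, if known

# Illustrative entry with placeholder values (not from the listing).
deal = EDiscoveryDeal(
    announced="2012-01-01",
    acquired="Example eDiscovery Co.",
    acquirer="Example Investor LLC",
)
print(deal)
```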
Drawn from public market-sizing estimates shared in leading electronic discovery reports, publications and posts over time, the following eDiscovery Market Size Mashup presents general worldwide market-sizing considerations for both the software and services segments of the electronic discovery market for the years 2012 through 2017.
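A “mashup” of this kind boils several public estimates down to a range. The sketch below shows one plausible way to do that; all figures are placeholders, not the estimates in the actual mashup.

```python
# Hypothetical worldwide market estimates (in $ billions) for one year,
# from multiple imagined sources; values are placeholders, not the
# figures in the actual mashup.
estimates_2014_software = [1.8, 2.1, 2.4]
estimates_2014_services = [4.0, 4.6, 5.2]

def mashup(estimates):
    """Reduce several public estimates to a (low, high, midpoint) range."""
    low, high = min(estimates), max(estimates)
    return low, high, (low + high) / 2

for segment, est in [("software", estimates_2014_software),
                     ("services", estimates_2014_services)]:
    low, high, mid = mashup(est)
    print(f"2014 {segment}: ${low:.1f}B-${high:.1f}B (midpoint ${mid:.1f}B)")
```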
There was a time when people believed the earth was flat. Or that humans would never walk on the moon. Or that computers had no place in the law. But then the non-believers proved them wrong. The earth is round, men have walked on the moon, and it is hard to imagine practicing law without a computer.
What about technology-assisted review? Are there myths surrounding TAR that will fall by the wayside as we better understand the process? Will we look back and smile at what people believed about TAR way back then? Turns out, that is already happening. Here are five myths that early TAR adopters believed true but that modern TAR systems prove wrong.
Reasonableness is a core concept in the law, right up there with the idea of justice itself. It not only permeates negligence law, it underlies discovery law as well. For instance, a party in litigation, and the attorneys representing that party, are required to make reasonable efforts to find the relevant documents requested. They are required to make efforts good enough to be considered reasonable, but nothing beyond that; they are not required to make superhuman, stellar efforts, and certainly not perfect efforts.
Beginning in early 2012, the topic of Technology-Assisted Review moved from expert-led explanations to mainstream mentions in legal community articles, opinions, surveys and reports. Provided for your research, review and consideration is a compilation of key headlines and links from online sources on the topic of Technology-Assisted Review from February 2012 to the present.
The data from my Enron review experiment shows that relatively high, consistent relevance determinations are possible. The comparatively high overlap achieved in this study suggests that the problem of inconsistent human relevance determinations can be overcome. All it takes is hybrid multimodal search methods, good software with features that facilitate consistent coding, good SMEs, and systematic quality-control efforts, including compliance with the less-is-more rule.
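For context on the overlap measure: in review-consistency studies it is commonly the Jaccard-style mutual agreement between two reviewers’ sets of relevant documents, |A ∩ B| / |A ∪ B|. The post does not spell out its formula, so the metric choice below is an assumption, and the document IDs are placeholders.

```python
def overlap(coder_a: set, coder_b: set) -> float:
    """Jaccard overlap of two reviewers' relevant-document sets:
    |A intersect B| / |A union B|. Returns 1.0 when both sets are empty."""
    union = coder_a | coder_b
    if not union:
        return 1.0
    return len(coder_a & coder_b) / len(union)

# Illustrative document IDs, not actual Enron review data.
reviewer_1 = {"doc01", "doc02", "doc05", "doc09"}
reviewer_2 = {"doc01", "doc02", "doc07", "doc09"}

print(f"Overlap: {overlap(reviewer_1, reviewer_2):.2f}")  # 0.60
```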