Presented at the 2012 ACM Conference on Computer Supported Cooperative Work (CSCW 2012), the following original research on the value of microblog content is shared for your review and consideration.
Prepared by Paul André (Carnegie Mellon University), Michael Bernstein (MIT), and Kurt Luther (Georgia Institute of Technology), “Who Gives a Tweet? Evaluating Microblog Content Value” offers quantifiable insight into the perceived value of Twitter “tweets” through the lens of content, context, and evolving social norms.
Primary questions considered in this milestone study, which gathered over 43,000 volunteer ratings of tweets, include:
“Conventional wisdom exists around these questions, but to our knowledge this is the first work to rigorously examine whether the commonly held truths are accurate. Further, by collecting many ratings, we are able to quantify effect sizes. A better understanding of content value will allow us to improve the overall experience of microblogging.” (Study Authors)
Predictors of tweet value in this study were based on “worth reading,” “neutral,” or “not worth reading” ratings of individual tweets drawn from eight content categories.
Additionally, participants’ reasons for liking or disliking a tweet fell into two groups:
Reasons for Liking
Reasons for Disliking
Source: “Who Gives a Tweet? Evaluating Microblog Content Value” – Paul André (Carnegie Mellon University), Michael Bernstein (MIT), and Kurt Luther (Georgia Institute of Technology) – as prepared for CSCW’12, February 11–15, 2012, Seattle, Washington.
This entry was posted on Tuesday, February 21st, 2012 at 1:49 pm.
Since its 2007 introduction, kCura’s Relativity product has become one of the world’s leading attorney review platforms. One element of Relativity’s strong growth and marketplace acceptance has been kCura’s focus on and support of partnerships. A by-product of review platform research, the following simple, sortable table aggregates kCura Premium Hosting Partners and Consulting Partners.
Drawing on public market sizing estimates shared in leading electronic discovery reports, publications, and posts over time, the following eDiscovery Market Size Mashup presents general worldwide market sizing considerations for both the software and services segments of the electronic discovery market for the years 2013 through 2018.
Provided as a non-comprehensive overview of key, publicly announced eDiscovery-related mergers, acquisitions, and investments to date in 2014, the following listing highlights key industry activity by announcement date, acquired company, acquiring or investing company, and acquisition amount (if known).
The consensus view is that after the purchase Microsoft will essentially disband Equivio and absorb its technology, its software designs, and some of its experts. Then, as Craig Ball predicts, they will wander the halls of Redmond like the great cynic Diogenes. No one seems to think that Microsoft will continue Equivio’s business.
In my previous post, I found that relevance and uncertainty selection needed similar numbers of document relevance assessments to achieve a given level of recall. I summarized this by saying the two methods had similar cost. The number of documents assessed, however, is only a very approximate measure of the cost of a review process, and richer cost models might lead to a different conclusion.
One distinction that is sometimes made is between the cost of training a document, and the cost of reviewing it. It is often assumed that training is performed by a subject-matter expert, whereas review is done by more junior reviewers. The subject-matter expert costs more than the junior reviewers—let’s say, five times as much. Therefore, assessing a document for relevance during training will cost more than doing so during review.
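The two-tier cost model described above can be sketched in a few lines. This is illustrative only: the 5x expert multiplier comes from the text, while the cost unit and all document counts below are hypothetical numbers chosen for the example.

```python
# A minimal sketch of the two-tier review cost model: training assessments
# are done by a subject-matter expert, remaining review by junior reviewers.
# The 5x multiplier is from the text; all other numbers are hypothetical.

JUNIOR_COST_PER_DOC = 1.0                         # arbitrary cost unit
EXPERT_COST_PER_DOC = 5.0 * JUNIOR_COST_PER_DOC   # expert costs five times as much

def total_review_cost(training_docs: int, review_docs: int) -> float:
    """Total cost when training is done by an expert and review by juniors."""
    return training_docs * EXPERT_COST_PER_DOC + review_docs * JUNIOR_COST_PER_DOC

# Two protocols that assess the same 10,000 documents overall:
print(total_review_cost(2_000, 8_000))   # trains on 2,000 docs -> 18000.0
print(total_review_cost(500, 9_500))     # trains on 500 docs   -> 12000.0
```

The point of the sketch is that two protocols assessing the same total number of documents can have very different costs once the training/review split is priced differently.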
A critical metric in Technology Assisted Review (TAR) is recall, which is the percentage of relevant documents actually found from the collection. One of the most compelling reasons for using TAR is the promise that a review team can achieve a desired level of recall (say 75% of the relevant documents) after reviewing only a small portion of the total document population (say 5%). The savings come from not having to review the remaining 95% of the documents.
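The recall arithmetic above can be made concrete with a small example. The collection size and relevance counts below are hypothetical, chosen so the figures match the 75% recall / 5% review example in the text.

```python
# Illustrative recall and review-savings arithmetic for a TAR workflow.
# All numbers are hypothetical, chosen to match the 75% / 5% example above.

collection_size = 1_000_000   # total documents in the collection
relevant_total = 40_000       # relevant documents in the collection
relevant_found = 30_000       # relevant documents found by the TAR process
docs_reviewed = 50_000        # documents actually reviewed (5% of collection)

recall = relevant_found / relevant_total           # 30,000 / 40,000 = 0.75
review_fraction = docs_reviewed / collection_size  # 0.05
docs_saved = collection_size - docs_reviewed       # 950,000 never reviewed

print(f"Recall: {recall:.0%}")                         # Recall: 75%
print(f"Reviewed: {review_fraction:.0%}")              # Reviewed: 5%
print(f"Documents saved from review: {docs_saved:,}")  # 950,000
```

Note that recall is measured against the relevant documents in the whole collection, not against the reviewed set; the savings come entirely from the 95% of documents that are never put in front of a reviewer.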
On Oct. 7, 2014, the Wall Street Journal reported that Microsoft had signed a letter of intent to buy what it called an Israel-based text analysis startup company named Equivio. The mainstream business press has virtually no understanding of the e-discovery industry, nor of anything having to do with litigation support. It also seems to have no real grasp of what kind of software Equivio and others like it in the industry have created.
By William Webber. My previous post described in some detail the conditions of finite population annotation that apply to e-discovery. To summarize, what we care about (or at least should care about) is not maximizing classifier accuracy in itself, but minimizing the total cost of achieving a target level of recall. The predominant cost in […]
ComplexDiscovery | Creative Commons Attribution 4.0 International