Big data is now big business. In recent years, due to the exponential growth of databases (spurred at least in part by social media and cloud storage) and of the capability of technology to undertake data analytics on a massive scale, organisations have started to appreciate the potential hidden value that could be derived from their data.
Kroll Ontrack surveyed over 550 law firm and corporate ediscovery professionals to gauge the biggest trends and impacts in ediscovery in 2014. This was a great year for the world of ediscovery, and now is the perfect time to share some of the interesting 2014 trends with all of you.
There’s never been a phenomenon like Docker. Eighteen months ago, the company took its core technology, which enables IT people to move software easily between different machines by enclosing it in “containers”, and made it open source.
While Americans’ associations with the topic of privacy are varied, the majority of adults in a new survey by the Pew Research Center feel that their privacy is being challenged along such core dimensions as the security of their personal information and their ability to retain confidentiality.
Until a few years ago, there was basically no effort expended to measure the efficacy of eDiscovery. As computer-assisted review and other technologies became more widespread, an interest in measurement grew, in large part to convince a skeptical audience that these technologies actually worked. Now, I fear, the pendulum has swung too far in the other direction and it seems that measurement has taken over the agenda.
In a sea of more than 600 e-discovery providers in the US alone, finding the right vendor to meet your requirements is difficult. As when purchasing a car, you have a choice of vendors ranging from local to regional and national providers: some use their own technologies, others use off-the-shelf products, and a few provide traditional processing and hosting services spawned from the paper world.
Hadoop, Data Lakes, Predictive Analytics and the Ultimate Demise of Information Governance – Part Two
Information Governance is, or should be, all about finding the information you need, when you need it, and doing so cheaply and efficiently. Information needs are determined by both law and personal preference, including business operation needs. To find information, you must first have it; not only that, you must keep it until you need it, which means preserving it.
The overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges. Hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool’s errand. Despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.
The International Organization for Standardization (ISO) has released two new standards for cloud computing in an attempt to put some order around the loose terminology in cloud computing. If you think you’ve seen this movie before, you’re right.
It’s generally accepted that the more information we have, the better. Knowledge is power, right? And won’t big data lead to better products, more responsive customer service and enhanced shopping experiences? That is true, but all that information also introduces significant cost and risk into an organization.
Many merger and acquisition (“M&A”) agreements lack specific representations and warranties regarding privacy issues. Often, this is because deal lawyers do not recognize potential privacy risks where the target company (the “Target”) lacks e-commerce websites or retail stores that collect consumer data. Nonetheless, significant privacy issues may exist even if the Target is a traditional “brick and mortar” business. Early attention to privacy issues in M&A transaction planning and due diligence can mitigate risks for both buyers and sellers.
Big data and the “internet of things” — in which everyday objects can send and receive data — promise revolutionary change to management and society. But their success rests on an assumption: that all the data being generated by internet companies and devices scattered across the planet belongs to the organizations collecting it. What if it doesn’t?
The companies listed below are the subject of an ongoing and unresolved FCPA-related investigation. The names are current through September 30, 2014. The entries are based on disclosures in SEC filings or credible news reports or both.
Extract: Choosing an e-discovery solution means addressing several interconnected issues. Product demos can be impressive, but don’t be fooled: a tool’s features can be the least important factor for you to consider. KPMG Canada’s Dominic Jaar, partner and national practice leader, information management services, and David Sharpe, manager of e-discovery, offer some key questions you should endeavour to answer while exploring solutions.
More than half of CEOs will have a senior “digital” leader role in their staff by the end of 2015, according to the 2014 CEO and Senior Executive Survey by Gartner, Inc. Gartner said that by 2017, one-third of large enterprises engaging in digital business models and activities will also have a digital risk officer (DRO) role or equivalent.
Major players in the oil and gas industry, particularly oilfield services companies, understand that Big Data analytics can provide valuable insights that will help make exploration, production, manufacturing, and global operations more streamlined, safe, and efficient. Leaders in the industry are already implementing Big Data solutions in their everyday operations and reaping the rewards of this long-term investment.
Information management: 5 big questions answered
There are many reasons for the dramatic proliferation of data, and this, alongside changing consumer behaviour, is having a profound effect on the role of the Chief Information Officer. Canon recently held an ‘Information at Work’ event that looked at how data was impacting the workplace, so we caught up with the company’s Director of Information Security, Quentyn Taylor, to find out what messages are coming out of the information segment at present. Here are his responses to our five key questions. TechRadar Pro: What is causing the massive influx of […]
If an organization can’t accurately classify its documents, it either drowns in documents because it keeps all of them, or it risks legal sanctions or operating problems because it discards records it needed to keep. Classification is the prerequisite to any type of information governance. Most automated document classification systems are based on text analysis, and these text-based systems have one insurmountable problem: they can’t analyze or classify documents that have no text, or only poor-quality text. In industries like oil & gas this is a huge problem, because in some collections over half of the documents are non-textual.
Faceted classification is an extremely efficient way to remove documents that are not needed by an organization and to remove exact and visual duplicates of the documents that are needed. When used as an integral part of information governance it can reduce required file storage by 90% or more.
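The exact-duplicate half of that claim rests on a simple technique: hash each file's bytes, and files sharing a digest are byte-identical copies. A minimal sketch of that step, using only the Python standard library (the function name and folder-walking behaviour are illustrative assumptions, not the article's method; detecting *visual* near-duplicates would additionally require a perceptual hash, which is out of scope here):

```python
import hashlib
from pathlib import Path

def find_exact_duplicates(folder):
    """Group files by SHA-256 digest; files sharing a digest are byte-identical.

    Returns a list of (duplicate_path, original_path) pairs, where the
    'original' is simply the first file seen with that digest.
    """
    seen = {}        # digest -> first path with that content
    duplicates = []
    for path in sorted(Path(folder).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            duplicates.append((path, seen[digest]))
        else:
            seen[digest] = path
    return duplicates
```

In a real collection you would hash in chunks rather than call `read_bytes()` on multi-gigabyte files, but the dedup logic is the same.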
Large collections of personal data are valuable. Crooks will try to steal them. This is not news. It’s 2014, so why are companies using security approaches that weren’t really adequate 20 years ago for databases three orders of magnitude smaller than today’s?
The e-discovery community is buzzing about predictive coding, and with good reason. The volume of Electronically Stored Information (ESI) is expanding at a breakneck pace. Every two years the amount of digital data is expected to almost double. Predictably, e-discovery costs are rising. Although the number of documents contained in a gigabyte of ESI varies significantly by file type, even at 5,000 documents per gigabyte, review costs add up fast.
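To see why costs "add up fast", it helps to run the arithmetic. A back-of-the-envelope sketch: the 5,000 documents per gigabyte figure comes from the excerpt above, but the review pace and hourly rate below are hypothetical assumptions chosen only to illustrate the scaling, not figures from the article.

```python
DOCS_PER_GB = 5_000     # document density cited in the article
DOCS_PER_HOUR = 50      # assumed pace for linear, eyes-on attorney review
HOURLY_RATE = 60.0      # assumed contract-reviewer rate, USD (hypothetical)

def linear_review_cost(gigabytes: float) -> float:
    """Estimated cost of linear review for a given volume of ESI."""
    hours = gigabytes * DOCS_PER_GB / DOCS_PER_HOUR
    return hours * HOURLY_RATE

# At these assumptions, 100 GB means 500,000 documents,
# 10,000 review hours, and a $600,000 review bill.
```

Even modest changes to the assumed rate or pace leave the conclusion intact: review cost grows linearly with data volume, and data volume is roughly doubling every two years.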
When we talk about “the next Heartbleed” we have to consider that new vulnerabilities are discovered every day and that many of them are just as widespread as Heartbleed.
The proliferation of data and how it is being managed — or in most cases mismanaged — is causing more organizations to question whether they have information assets or liabilities. Two of the major drivers pushing organizations to finally get their data under control are costs and risks. “People are starting to get interested in reducing their overall data, in many cases for regulatory issues,” said Dera Nevin, managing director and an electronic discovery lawyer at re:Discovery Law PC. Nevin, who was speaking to the International Legal Technology Association last week at an event hosted by Norton Rose Fulbright Canada LLP, […]
Finding the Signal in the Noise: Information Governance, Analytics, and the Future of Legal Practice
Cite as: Bennett B. Borden & Jason R. Baron, Finding the Signal in the Noise: Information Governance, Analytics, and the Future of Legal Practice, 20 Rich. J.L. & Tech. 7 (2014), http://jolt.richmond.edu/v20i2/article7.pdf. Bennett B. Borden* and Jason R. Baron** Introduction: In the watershed year of 2012, the world of law witnessed the first concrete discussion of how predictive analytics may be used to make legal practice more efficient. That the conversation about the use of predictive analytics has emerged out of the e-Discovery sector of the law is not all that surprising: in the last decade […]