Sentiment analysis can be used to measure how positively or negatively people express themselves, for example in relation to a given product or brand. It is useful for analyzing and responding to feelings expressed in customer emails, calls, reviews, and more.
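At its simplest, sentiment analysis assigns each text a polarity score. The toy sketch below illustrates the idea with a tiny hand-made lexicon; the word list and scores are invented for illustration, and real systems (lexicon tools or machine-learned models) are far more sophisticated.

```python
# Toy lexicon-based sentiment scorer (illustrative only; the word
# weights here are made up, not from any real sentiment lexicon).
LEXICON = {"great": 1.0, "love": 1.0, "good": 0.5,
           "bad": -0.5, "terrible": -1.0, "hate": -1.0}

def sentiment_score(text: str) -> float:
    """Average polarity of known words; 0.0 if none are found."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I love this product, it is great!"))    # 1.0 (positive)
print(sentiment_score("Terrible support, I hate the update."))  # -1.0 (negative)
```

Scores near +1 indicate positive wording, scores near -1 negative wording, which is the kind of signal that can then be aggregated across emails, calls, or reviews.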
Social media contain open and public discussions about every conceivable topic. These discussions can provide invaluable insights into the views and narratives among consumers, influencers, and businesses. But the information is unstructured, in the form of text and images, and spread out across a large number of social media platforms. Making sense of it has typically required programming and data science skills: the data needs to be collected, preprocessed, structured, and analyzed. This post presents a new approach, one which does not require any programming or expert text analytics skills.
The COVID-19 Open Research Dataset (CORD-19) is a public dataset with over 59,000 coronavirus-related scholarly articles. It was prepared by the White House and a coalition of leading research groups. This post shows how Dcipher Analytics can be used to explore the CORD-19 dataset and extract useful information from it.
In this post, we analyze the discussion about Greta Thunberg in the public social media sphere to understand who is talking and what they are saying. We identify key influencers and map the discussion.
In this article, we offer a step-by-step guide to the news media mining approach in Dcipher Analytics. It does not require any programming or advanced analytics skills. Instead of reading articles one by one, the new approach allows you to visually find patterns across large collections of articles and drill down into article clusters of interest.
Reviewing the academic literature around a topic is typically a qualitative process, where a small number of highly cited academic articles are studied in depth. But useful findings may also exist in the “long tail” of less cited articles. Depending on the research topic of interest, this “long tail” may consist of thousands or even tens of thousands of articles. To map the entire landscape of academic research around a topic, we therefore need a different approach, one which combines quantitative mapping with qualitative analysis.
The digital information universe is growing exponentially. It was only ten years ago, in 2009, that the total amount of digital information exceeded 1 zettabyte – a hundred million times more than what fits in the U.S. Library of Congress, the world’s largest library. Since then it has doubled every second year and is expected to exceed 40 zettabytes next year. The vast majority of this data is unstructured, particularly in the form of text, images, video, and audio.
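The arithmetic behind that projection can be checked directly. Taking the post's own figures (1 zettabyte in 2009, doubling every second year) as given:

```python
# Projection using the post's figures: 1 ZB in 2009, doubling every 2 years.
BASE_ZB, BASE_YEAR = 1.0, 2009

def projected_zettabytes(year: int) -> float:
    """Exponential growth: doublings = elapsed years / 2."""
    return BASE_ZB * 2 ** ((year - BASE_YEAR) / 2)

# Eleven years on, 5.5 doublings give roughly 45 ZB, i.e. above 40.
print(round(projected_zettabytes(2020), 1))  # 45.3
```

Under these assumptions the total passes 40 zettabytes around 2020, consistent with the claim above.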
Netnography is a research method useful for studying online consumer culture. By observing naturally occurring discussions and phenomena on the internet, it seeks to unpack the cultural codes and expressions that influence consumption choices within the communities under study. It views social media as much more than likes, reposts, influencers, and keyword occurrences. To netnographers, social media are manifestations of cultural phenomena, making them ideal places for acquiring a rich and contextualized understanding of consumers. To make sense of such cultural data, the researcher is a fly on the wall, observing but not interfering.
Artificial intelligence (AI) is causing two major shifts in information search. First, it is moving search from manual, criteria-based queries to cognitive and adaptive search. Second, it is replacing statistical, hypothesis-based approaches with open-ended, deep learning-powered techniques. This post outlines how anyone, with the right tool, can start searching for information using the AI-powered approach.
Traditionally, analysis of free-form text data from surveys has required coding, where researchers read through the answers and manually code them with fixed or emergent categories. To ensure accuracy and consistency, each answer needs to be coded by at least two researchers. If the survey is conducted in multiple languages, native speakers of each language need to do the coding. They also need to coordinate with each other to make sure they interpret answers in a consistent way.
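The consistency between two coders is commonly quantified with an inter-rater agreement statistic such as Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch (the category labels and codings below are hypothetical examples, not real survey data):

```python
from collections import Counter

def cohens_kappa(coder_a: list, coder_b: list) -> float:
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance overlap given each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of five survey answers by two researchers.
a = ["price", "quality", "price", "service", "quality"]
b = ["price", "quality", "service", "service", "quality"]
print(round(cohens_kappa(a, b), 2))  # 0.71
```

Values near 1 indicate strong agreement; low values signal that the coding scheme or coder instructions need revision.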