Interview with Asaf Frige

Signals Analytics uses open-source, external data: everything we collect and analyze is publicly available. Our approach is unusual because most big data solutions focus on internal data retrieved from inside the company (CRM, ERP, supply chain management, financials), while we work with external data sources.

What Data Does Signals Analytics Work With?

Decision-makers in product development and portfolio management need to understand where their competitors are innovating and what is happening outside their organization: what new technologies are being developed? What are consumers' unmet needs?

This is where Signals Analytics' solutions come in. To answer these questions, we analyze thousands of external data sources (intellectual property such as patents and research papers, product launches, consumer data points from social media, and regulatory policies) to help our clients make better, more intelligent strategic decisions before they launch new products.

So does that mean Signals Analytics only uses what is available to everyone on the web?

We use data that is open to the public, which comes from both paid and unpaid sources. Unpaid sources include all openly available information: government agencies, e-commerce, social media, press releases, and business news. Paid sources include patent registries, academic publications, industry journals, financial and product databases, and niche data sources. Instead of going to hundreds of different service and knowledge providers, we cover our target industries' most relevant sources and bring them together in one place.

How does Signals Analytics use external data to identify consumers' unmet needs?

In basic terms: Signals Analytics looks at how companies describe themselves and which product features they market most heavily, and cross-analyzes this with what consumers are discussing to identify the gap between the market and consumers' needs. This is something you can only do with different types of external data. For example, we recently looked at how a smart approach to external data could have helped Keurig, the maker of a popular coffee brewing system, avoid a steep drop in sales when they launched their new product this past year. We compared the product features Keurig marketed most heavily with what consumers were discussing.


The results were astounding: Keurig removed features that were vital to the success of their new product, while highlighting features such as ‘easy to use’ and ‘size’ that did not align with consumers' needs. Consumer insights showed that people cared most about ‘flavor’ and ‘reusable cups’, but Keurig missed these signals and eliminated reusable cups completely in their next-generation product. The drop in sales was not surprising: consumers had been saying all along what they cared about.
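To make the idea concrete, here is a minimal sketch of this kind of gap analysis. The feature list, sample texts, and simple substring matching are illustrative assumptions, not Signals Analytics' actual methodology:

```python
# A minimal, hypothetical sketch of the gap analysis described above.
# The feature list, sample texts, and substring matching are assumptions
# for illustration, not Signals Analytics' actual pipeline.
from collections import Counter

FEATURES = ["easy to use", "size", "flavor", "reusable cup"]

def feature_mentions(documents):
    """Count how many documents mention each feature (case-insensitive)."""
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        for feature in FEATURES:
            if feature in text:
                counts[feature] += 1
    return counts

marketing_copy = [
    "The new brewer is easy to use and compact in size.",
    "A sleek size that fits any kitchen, and easy to use from day one.",
]
consumer_posts = [
    "I just want better flavor and my reusable cup back.",
    "Flavor is weak, and it no longer accepts my reusable cup.",
    "Loved the old reusable cup option; flavor matters more than size.",
]

marketing = feature_mentions(marketing_copy)
consumers = feature_mentions(consumer_posts)

# Rank features by how much more often consumers mention them than the
# marketing does; a large positive gap suggests an unmet need.
for feature in FEATURES:
    m = marketing[feature] / len(marketing_copy)
    c = consumers[feature] / len(consumer_posts)
    print(f"{feature:12} marketing={m:.0%} consumers={c:.0%} gap={c - m:+.0%}")
```

In this toy run, ‘flavor’ and ‘reusable cup’ show large positive gaps (consumers discuss them, marketing ignores them), mirroring the pattern the Keurig analysis surfaced.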

How do you determine what data is relevant?

Collecting the right data starts with the right approach: knowing which data is important and how to structure it. Our subject matter experts choose the most relevant taxonomies from our repository for each industry, topic, and category we work with, and identify the applicable keywords to retrieve and categorize the relevant data. The taxonomy is the "scaffold" of parameters on which we structure external data in order to convert it into our unified data model. When needed, our data analysts customize the taxonomy to the client's specific needs. For example, if a client works in diabetes devices and is interested in diet trends, we add elements from taxonomies in the nutrition and microbiome realms.
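As a rough illustration, a taxonomy like this can be thought of as a nested mapping from parameters to categories to keywords. The structure and terms below are hypothetical examples, not Signals Analytics' internal data model:

```python
# A toy sketch of a taxonomy as a keyword "scaffold" (illustrative structure,
# not Signals Analytics' unified data model). Each node maps a parameter to
# categories, and each category to the keywords used to retrieve and
# categorize matching documents.
diabetes_devices_taxonomy = {
    "device_type": {
        "insulin pump": ["insulin pump", "patch pump"],
        "glucose monitor": ["cgm", "continuous glucose monitor", "glucometer"],
    },
    "consumer_need": {
        "convenience": ["discreet", "easy to wear", "no fingersticks"],
        "accuracy": ["accurate reading", "calibration", "sensor error"],
    },
}

# Customizing for a client interested in diet trends: graft in elements
# from hypothetical nutrition and microbiome taxonomies, as described above.
nutrition_extension = {
    "diet": {
        "low carb": ["keto", "low carb", "carb counting"],
        "gut health": ["microbiome", "probiotic", "gut flora"],
    },
}
diabetes_devices_taxonomy.update(nutrition_extension)

def categorize(text, taxonomy):
    """Tag a document with every (parameter, category) whose keywords it mentions."""
    text = text.lower()
    return [
        (param, category)
        for param, categories in taxonomy.items()
        for category, keywords in categories.items()
        if any(kw in text for kw in keywords)
    ]

print(categorize("New CGM pairs with a keto carb counting app", diabetes_devices_taxonomy))
# -> [('device_type', 'glucose monitor'), ('diet', 'low carb')]
```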

Then we construct the relevant text analysis algorithm and apply our proprietary text analysis tool to the data. The algorithm uses a wide, comprehensive array of terms and keywords to capture and classify the data as accurately as possible.

A very basic example of enhancing the keyword array is using both commercial and scientific terms: in the soft drinks industry, commercial terms such as "bubbly" and "fizzy" would be used alongside scientific terms such as "carbon dioxide" to extend the text analysis algorithm's reach.
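To sketch how such a keyword array widens a classifier's reach, the toy example below maps both commercial and scientific terms to a single concept, so documents in either register are captured. The term lists and regex matching are assumptions for illustration, not the proprietary algorithm:

```python
# A hedged illustration of widening the keyword array: commercial and
# scientific terms both resolve to one concept. The term lists and regex
# matching are assumptions for the example, not the proprietary tool.
import re

CONCEPT_TERMS = {
    "carbonation": ["bubbly", "fizzy", "carbonated", "carbon dioxide", "co2"],
    "sweetener": ["sugary", "sweet taste", "sucrose", "aspartame", "stevia"],
}

# Compile one pattern per concept, matching any of its terms on word boundaries.
CONCEPT_PATTERNS = {
    concept: re.compile(r"\b(" + "|".join(map(re.escape, terms)) + r")\b", re.I)
    for concept, terms in CONCEPT_TERMS.items()
}

def detect_concepts(text):
    """Return every concept whose commercial or scientific terms appear in the text."""
    return {c for c, pattern in CONCEPT_PATTERNS.items() if pattern.search(text)}

print(detect_concepts("Love how fizzy it is!"))                      # {'carbonation'}
print(detect_concepts("Carbon dioxide levels affect shelf life."))   # {'carbonation'}
print(detect_concepts("Sweetened with stevia, lightly carbonated.")) # both concepts
```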

This is how Signals Analytics delivers business solutions that are both automated and fully customized.

Where did you learn all this?

I have over ten years of experience as an officer in an elite military intelligence unit. I was responsible both for the technology systems, from analytics to data mining, and for the gathering, analysis, and dissemination of information of strategic and tactical value; both roles taught me how to make quick decisions effectively. I aim to bring the best practices of military intelligence to Signals Analytics to address our clients' daily business needs, so they can make impactful, strategic, evidence-based decisions.

Written by Asaf Frige

Asaf Frige is VP of Delivery at Signals Analytics. He manages the team responsible for discovering, identifying, and collecting data for Signals Analytics' research department and for developing search strategies. He graduated magna cum laude from Bar-Ilan University with an MBA specializing in Finance, and holds a B.A. in Accounting and Economics from the same university. Asaf has ten years of military experience as an intelligence officer.