Google’s AI-powered personal assistant has the potential to reshape the landscape of voice search with its machine learning capabilities. However, due to the very nature of AI and machine learning, using Google Assistant comes with a trade-off: your personal data.

Built into Allo, as well as the recently announced Pixel phone and Google Home device, Google Assistant collects more data from its users the more it is called upon.

Of course, this is all in an effort to deliver personal results and provide answers to more personal questions such as “when is my next appointment?”. It’s designed to learn about people’s habits and preferences in order to become smarter and more accurate.

As previously reported, conversations on Allo are not encrypted end to end by default. There is an option to turn encryption on, but then you will no longer be able to use Google Assistant within the app.

Over time Google Assistant will learn about where you’ve been, where you’re going, what you like to eat, what kind of music you listen to, how you communicate, who your best friends are, and so on. As Gizmodo points out, it’s even capable of accessing information from anything stored on your device.

While many are embracing the idea of a personalized virtual assistant, it’s important to point out the drawbacks as well. Development of this technology relies on relinquishing your security and privacy to Google.

What Google will do in the future with all this data is anyone’s guess, but part of the company’s business model is to make money through targeted advertising based on user data. In fact, Google has already stated, in a help article, that it will do just that:

”If you interact with the Google Assistant, we treat this similarly to searching on Google and may use these interactions to deliver more useful ads.”

Google is certainly not the only company that has a responsibility to serve advertisers. For example, part of Apple’s business model is to serve advertisers through its iAd network.

A major difference is that the iAd network does not collect data from Siri, Apple’s own virtual assistant, nor does it collect data from iMessage, call history, Contacts or Mail. Apple’s CEO, Tim Cook, has confirmed this in the company’s privacy policy:

”We don’t build a profile based on your email content or web browsing habits to sell to advertisers. We don’t “monetize” the information you store on your iPhone or in iCloud. And we don’t read your email or your messages to get information to market to you.”

One of the most important questions people have to think about going forward is: how much privacy and personal data are you willing to give up in order to experience the benefits offered by AI-powered technologies?

Source : searchenginejournal



A new Google pilot program now allows publishers to describe CSV and other tabular datasets for scientific and government data.

Google has added a new structured data type named Science datasets. This markup could eventually be used by Google for rich cards/rich snippets in the Google search results interface.

Science data sets are “specialized repositories for datasets in many scientific domains: life sciences, earth sciences, material sciences, and more,” Google said. Google added, “Many governments maintain repositories of civic and government data,” which can be used for this as well.

Here is the example Google gave:

For example, consider this dataset that describes historical snow levels in the Northern Hemisphere. This page contains basic information about the data, like spatial coverage and units. Other pages on the site contain additional metadata: who produces the dataset, how to download it, and the license for using the data. With structured data markup, these pages can be more easily discovered by other scientists searching for climate data in that subject area.

This specific schema is not something that Google will show in the search results today. Google said this is something they are experimenting with: “Dataset markup is available for you to experiment with before it’s released to general availability.” Google explained you should be able to see the “previews in the Structured Data Testing Tools,” but “you won’t, however, see your datasets appear in Search.”

Here are the data sets that qualify for this markup:

  • a table or a CSV file with some data;
  • a file in a proprietary format that contains data;
  • a collection of files that together constitute some meaningful dataset;
  • a structured object with data in some other format that you might want to load into a special tool for processing;
  • images capturing the data; and
  • anything that looks like a dataset to you.
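To make the idea concrete, here is a minimal sketch of what such dataset markup might look like as JSON-LD using the schema.org Dataset type. The dataset name, description, URLs, and license here are invented for illustration; anyone publishing real markup should verify the exact properties against Google's documentation and the Structured Data Testing Tool.

```python
import json

# Hypothetical schema.org Dataset markup; all field values are invented
# for illustration and should be replaced with real dataset details.
dataset_markup = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Northern Hemisphere Historical Snow Levels",
    "description": "Monthly snow-cover measurements, with spatial coverage and units.",
    "spatialCoverage": "Northern Hemisphere",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/data/snow-levels.csv",
    },
}

# Embed the JSON-LD in a page as a script tag.
html_snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(dataset_markup, indent=2)
    + "\n</script>"
)
print(html_snippet)
```

The point of the markup is simply machine readability: a crawler can read the name, coverage, license, and download location without parsing the surrounding prose.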

Aaron Bradley seemed to first spot this and said “with [a] pilot program, Google now allows publishers to describe CSV and other tabular datasets.”

Source : http://searchengineland.com/


While robots and computers will probably never completely replace doctors and nurses, machine learning/deep learning and AI are transforming the healthcare industry, improving outcomes, and changing the way doctors think about providing care.

Machine learning is improving diagnostics, predicting outcomes, and just beginning to scratch the surface of personalized care.

Imagine walking in to see your doctor with an ache or pain. After listening to your symptoms, she inputs them into her computer, which pulls up the latest research she might need to know about how to diagnose and treat your problem. You have an MRI or an X-ray, and a computer helps the radiologist detect any problems that could be too small for a human to see. Finally, a computer looks at your medical records and family history, compares them with the best and most recent research, and suggests a treatment protocol to your doctor that is specifically tailored to your needs.

Industry analyst firm IDC predicts that 30 percent of providers will use cognitive analytics with patient data by 2018. It’s all starting to happen, and the implications are exciting.


CB Insights identified 22 companies developing new programs for imaging and diagnostics. This is an especially promising field in which to introduce machine learning, because computers and deep learning algorithms are getting better and better at recognizing patterns, which, in truth, is what much of diagnostics is about.
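The pattern-recognition idea can be shown with a deliberately tiny sketch. This is not any company's actual system: real diagnostic tools use deep learning over images and far richer data. The toy below just matches a new case against labelled past cases, which is the underlying principle, using made-up feature vectors and labels.

```python
# Toy illustration of pattern recognition for diagnostics: a
# nearest-neighbour classifier over invented measurement vectors.
from math import dist

# Hypothetical labelled cases: (feature vector, diagnosis label).
labelled_cases = [
    ((0.9, 0.1), "benign"),
    ((0.8, 0.2), "benign"),
    ((0.2, 0.9), "suspicious"),
    ((0.1, 0.8), "suspicious"),
]

def classify(features):
    """Return the label of the closest labelled case."""
    _, label = min(labelled_cases, key=lambda case: dist(case[0], features))
    return label

print(classify((0.85, 0.15)))  # falls in the "benign" cluster
```

A deep learning system replaces the hand-picked features and distance measure with learned ones, but the diagnostic question is the same: which known pattern does this new case most resemble?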

An IBM-backed group called Pathway Genomics is developing a simple blood test to determine if early detection or prediction of certain cancers is possible.

Lumiata has developed predictive analytics tools that can discover accurate insights and make predictions related to symptoms, diagnoses, procedures, and medications for individual patients or patient groups.


IBM’s Watson has been tasked with helping oncologists make the best care decisions for their patients. The Care Trio team has developed a three-pronged approach that helps doctors devise and understand the best care protocols for cancer patients.

The CareEdit tool helps teams create clinical practice guidelines that document the best course of treatment for different types of cancer. CareGuide feeds the information from CareEdit into a “clinical decision support system” to help doctors choose the right treatment plan for an individual patient. And CareView is an analysis tool that can evaluate the outcomes of past clinical decisions and identify patients who received treatments that differed from the recommendations. This kind of retrospective analysis can help doctors refine their guidelines, closing the circle back to the CareEdit tool.

The team hopes that the Care Trio will improve clinical outcomes and increase survival rates for cancer patients while still reducing treatment costs for providers. The first version is currently being deployed at a large cancer treatment center in Italy.

In a completely different field, Ginger.io is developing an app to remotely deliver mental health treatments. The app allows people to analyze their own moods over time, learn coping strategies that have been developed by doctors, and access additional support as needed.

Follow-up care

But the advances don’t stop with diagnosis or treatment.

One of the biggest hurdles in health care is hospital readmittance. Doctors around the world struggle with how to keep their patients healthy and following their treatment recommendations when they go home.

AiCure uses mobile and facial recognition technologies to confirm that a patient is taking the right medications at the right time, and alerts doctors if something goes wrong.

NextIT is developing a digital health coach, similar to a virtual customer service rep on an ecommerce site. The assistant can ask patients about their medications and remind them to take them, ask about symptoms, and convey that information to the doctor.

The Caféwell Concierge app uses IBM Watson’s natural language processing (NLP) to understand users’ health and wellness goals, and then devises the right balance of nudges and alerts so users can meet their targets and the app can reward them.

And this is just the beginning.  As these technologies develop, new and improved treatments and diagnoses will save more lives and cure more diseases. The future of medicine is based in data and analytics.

Source : http://www.forbes.com/


Demand for big data expertise is growing every day, as more and more companies become aware of the benefits of collecting and analyzing data. Unfortunately, the number of people trained to analyze this data isn’t growing in line with the demand. This creates a challenge for companies looking to hire expert people, especially for smaller firms less able to compete on salary and benefits.

The good news is that, even if you’re having trouble recruiting data scientists because of stiff competition, or if you simply haven’t got the budget to recruit, you can still access big data skills. Hiring in-house staff isn’t the only way – let’s look at some of the best alternatives.

Focus on attracting or developing certain skills

I believe there are six key skills required to work with big data: analytical skills, creativity, a knack for maths and statistics, computer science skills, business acumen, and communication skills. Rather than hiring people with these skills, you may be able to build on your existing skills in-house. For example, you may have an IT person who already covers the computer science side of things who would love the opportunity to learn about analytics. You could pair them up with a creative, strategic thinker who understands the business’s needs and you’re well on your way to having the skills you need without hiring anyone new.

Nurture your existing talent

Developing your existing people is a brilliant place to start, especially in smaller businesses or companies on a tight budget. Increasingly, colleges and universities are putting courses online for free. Some of the courses offer certificates of completion or other forms of accreditation; some don’t. But the skills learned should matter more than a piece of paper.

Excellent examples include the University of Washington’s Introduction to Data Science course, which is available online at Coursera, or Stanford’s Statistics One course, also available on Coursera. For those interested in the programming side of things, check out Codecademy’s Python course.

Thinking outside the box

It’s worth considering unusual sources where you might be able to recruit help, either on a permanent basis or on a temporary basis (such as getting help to analyse data for a one-off project). Universities with a data science department, or any kind of data institute for that matter, are a good place to start. You could offer an internship, taking on some students to help with an analysis project, or you could see if the university is open to a joint project of some kind. If you’ve got data to crunch, they may very well be up for crunching it! In return you could mentor students on the key skills needed to survive in business or offer interview training and practice.

Thinking outside the box is really about finding creative ways to pull the necessary skills together in whichever way works for you. It may be easy to find someone with statistical and analytical skills but they may fall short on business insights or communication skills … but that needn’t be a problem if other staff can supplement those skills.

Also consider whether there’s an opportunity to create an industry group with other companies facing similar challenges to your own. Even if you’re not keen to share detailed data with these companies (they probably don’t want to with you either), you can still pool resources to get data analysis done on a large scale without necessarily sharing your private data. Remember that data can always be aggregated or anonymised to remove specifics that you don’t want shared.
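Aggregation of this kind is straightforward in practice. The sketch below is a minimal illustration of the idea, with invented field names and records: identifiers and per-customer figures stay in-house, and only per-region totals and counts would ever be shared with an industry group.

```python
# Minimal sketch of aggregating records before sharing them outside the
# company. Field names and values are hypothetical.
from collections import defaultdict

customer_records = [
    {"name": "Alice", "region": "North", "spend": 120.0},
    {"name": "Bob", "region": "North", "spend": 80.0},
    {"name": "Carol", "region": "South", "spend": 200.0},
]

def aggregate_by_region(records):
    """Drop identifiers and return only per-region counts and totals."""
    totals = defaultdict(lambda: {"customers": 0, "total_spend": 0.0})
    for rec in records:  # the 'name' field is deliberately never copied out
        bucket = totals[rec["region"]]
        bucket["customers"] += 1
        bucket["total_spend"] += rec["spend"]
    return dict(totals)

print(aggregate_by_region(customer_records))
```

Note that simple aggregation is not a complete anonymisation strategy on its own; small groups can still identify individuals, so minimum group sizes or other safeguards are usually applied as well.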

Harness the power of the crowd

You might consider crowdsourcing your big data project. Crowdsourcing is a way of using the power of a crowd to complete a task. (If you haven’t heard of crowdsourcing before, you’ve probably heard of crowdfunding platforms, like Kickstarter, which operate on a similar basis – using the power of a crowd to achieve a goal.)

A few crowdsourcing platforms, like Kaggle, now allow thousands of data scientists to sign up for big data projects. A business can then upload the data it has, say what problem it needs solved, and set a budget for the project. It’s a great option for companies with a small amount to spend, or those who want to test the waters. But it’s also a regular resource for big firms like Facebook and Google. Some firms are even known to recruit full-time analysts from crowdsourcing platforms after being blown away by their work. This gives you an idea of the quality of talent on crowdsourcing platforms.

Tapping into external service providers

If none of the above options work for you, you can still make the leap into big data. A great way to supplement missing skills, particularly when it comes to the statistical, analytical and computer science aspects, is to hire external providers to handle your data and analytics needs. There are more and more big data providers and contractors springing up who are able to source or capture data on your behalf and analyse it (or work with data you already have). Some big data providers are household names, like Facebook and IBM, but you certainly aren’t limited to big blue-chip companies. There are tons of smaller providers out there who have a great deal of experience working with small and medium-sized firms, or expert knowledge of specific sectors.

What I see in practice is that businesses of all shapes and sizes can now access big data skills – and on almost any budget.

Source : http://www.forbes.com/


Data is marketing's most precious commodity — now marketers need to leverage it.

The pace of doing business in today’s ultraconnected world has changed everything. From the way advertisements are bought, sold and displayed, to the way businesses market to their buyers, we’ve entered an entirely new era.

Although some get a bit weepy and nostalgic wishing for “the good old days,” these are exciting times for today’s leading companies. They’re even more thrilling for today’s disruptors.

In today’s marketing organizations, there’s an ongoing war. While there’s immense pressure on the marketing department — from the board — to completely understand the company's buyers, it has also been tasked with understanding the value of every marketing dollar spent. Now, more than ever, marketers are being challenged to display the impact they make on a business’s success or failure.

To do this, marketers need data.

If only it were that easy. You see, today’s marketing departments are dealing with a tremendous amount of complexity, for two reasons:

1. All-new marketing source systems are coming online at an alarming rate.

2. If marketers want to really understand their buyers’ behavior and how to better connect with them, it will be necessary to invest in many best-of-breed point solutions. This translates into a company’s marketing technology stack getting significantly larger than it was even just a handful of years ago.

And if that wasn’t enough of a hurdle to clear, it gets worse. That’s because data isn’t always easy for marketers to get their arms around. Take, for example, the typical marketing campaign for a product launch. Likely an in-house team handles email blasts, search engine optimization and public relations efforts. Creative agencies get tasked with the messaging, collateral, website buildout and event organization. Then, media agencies take care of paid efforts across a variety of channels — TV, radio, digital, etc.

Because of this fractured approach, marketers don't always own all of the data relative to their activities. As you may have expected by now, all of it provides value as businesses try to understand their customers and their respective journeys. Here’s the kicker: When access is granted to data that the marketing department does not directly own, much of the time it’s not at the correct level of detail required to gain actionable insight.

Meanwhile, consumers are exposed to all of these campaign efforts and are, generally speaking, unaware of the idea of channels. In-market buyers simply interact with brands and expect that today’s businesses will engage with them on a segment-of-one basis. And the benchmark on quality and speed continues to rise.

This means that today’s marketers are plagued with a two-headed monster. First, they must figure out how they can do more with the budget and resources they are allocated. Second, they must figure out how to impact the business's bottom line by understanding how their efforts deliver an enriched customer experience, create awareness in a market and more.

It’s not uncommon to think of marketing VPs as MacGyvers, because they must be agile, adaptable and able to creatively work around their less-than-ideal surroundings.

There remains one constant in this situation, however. That is data. Without this precious commodity, today’s marketers are going to be left hobbled, unable to drive value for their companies. And they certainly won't have a chance to determine their contribution or return on investment.

It’s as simple as that.

Source : http://www.cio.com/article/3112186/analytics/why-today-s-marketers-need-data.html


As search engine optimization (SEO) professionals, we obsess with search data from a wide variety of resources. Which one is best for our clients? Which keyword research tool reveals the most accurate search behaviors when rebuilding a site’s information architecture? Does our web analytics data validate our keyword research?

And, more importantly, do these tools provide the information you most need? Some answers might surprise you.

Keyword research data

I love keyword research tools. I use all of them because I can discover core keyword phrases, which are commonly used across all of the commercial web search engines. And I can also tailor ads and landing pages to searchers who typically use a single, targeted search engine (and it isn’t always Google, as one might imagine).

However, keyword research tools are not a substitute for a knowledgeable and intuitive search engine marketer. All too often, website owners and even experienced search engine optimization professionals launch into a site’s information architecture without gauging user response. As good SEO professionals, we should understand when it is appropriate to implement keywords in a site’s information architecture, when keyword usage overwhelms users, and when keyword usage needs to be more apparent.

This situation occurred recently when I was performing some usability tests on a client site’s revised information architecture. This particular client website is being delivered in multiple languages. We were testing American English, British English, and French. Therefore, the test participants were American, British, and French.

All of the keyword research tools showed the word “student” or “students” (in French, “étudiant” or “étudiants”) as a possible target. The appearance of this word in both keyword research data and in the site’s web analytics data led my client to believe that we should make this area a main category.

If we had relied on the data from keyword research tools, we would have been wrong. If we had relied on the data from web analytics software, we would have been wrong.

The face-to-face user interaction gave us the right answer.

The facial expressions were enough to convince me. Almost every single time the word “student” or “étudiant” appeared during the usability test, I saw confusion. When I asked test participants why they seemed confused, they said that the particular keyword phrase was not appropriate for that type of website. They then placed the student-related information groupings in one of two piles:

  • Discard – Participants felt that the information label and/or grouping did not belong on the website at all.
  • Do not know – Participants were unsure whether the information label and/or grouping belonged on the website.

The discard pile won, with over 90% from all three language groups.

Now, imagine if this company did NOT have one-on-one interaction with searchers during the redesign process and only relied on keyword research tools. How much time and money might have been wasted?

Keyword research data is not the only type of data that can be easily misinterpreted.

Web analytics search data

One search metric that clients and prospects inevitably mention is “stickiness.” In other words, one of their search marketing goals is to increase the number of page views per visitor via search engine traffic, especially if the site is a publisher, blog, or news site. Increasing the number of page views per visitor provides more advertising opportunities as well as a positive branding impact. The average time on site (if it is longer than two minutes) is also commonly viewed as a positive search metric.
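Both metrics described above are simple averages over visit records. The sketch below shows how they would be computed from analytics data; the visit records themselves are invented for illustration.

```python
# Computing two common "stickiness" metrics from invented visit records:
# average page views per visitor, and average time on site.
visits = [
    {"visitor": "a", "page_views": 5, "seconds_on_site": 310},
    {"visitor": "b", "page_views": 2, "seconds_on_site": 45},
    {"visitor": "c", "page_views": 8, "seconds_on_site": 500},
]

pages_per_visitor = sum(v["page_views"] for v in visits) / len(visits)
avg_time_on_site = sum(v["seconds_on_site"] for v in visits) / len(visits)

print(f"pages per visitor: {pages_per_visitor:.1f}")
print(f"avg time on site: {avg_time_on_site:.0f}s")
```

The numbers are easy to compute; as the rest of this section argues, the hard part is interpreting them, because a high average can hide very different behaviors.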

Or so it might seem. Here is an example.

Many SEO professionals, including me, provide blog optimization for a wide variety of companies (ecommerce, news, software, etc.). Not only do we provide keyword research for blogs, we must also monitor the effectiveness of keyword-driven traffic via web analytics data.

Upon initial viewing, the blog’s analytics data might indicate increased stickiness. Searchers are reading more blog entries. Searchers are engaged. Therefore, the blog content must be great… at least, that is a common conclusion.

For an exploratory usability test, I ask test participants to tell me about a blog post that they found very helpful. I ask them why they liked the blog’s content, and I listen very closely for keyword phrases. Audio and/or video recording makes this job a little easier.

When I asked test participants to refind desired information on a blog on the lab’s computer, I did not hear, “This blog content is great!” Comments I frequently heard were:

  • “I can’t find this [expletive] thing.”
  • “Now where could it be? I saw it here before….”
  • “I think this was posted in [month/day/year]….”
  • “Where the [expletive] is it?”

As you might imagine, the use of expletives became more and more frequent with the increased number of page views.

Sure, searchers who discover great blog content might bookmark the URL, or they might link to it from a “Links and Resources” section of their web site, or they might cite the URL in a follow-up post on another website. All of these actions and associated behaviors make it easier for searchers to refind important information.

However, when I review web analytics data, I often find that site visitors do not take these actions as frequently as people might think. Instead, with careful clickstream analysis combined with usability testing, I see that the average page view per visitor metric is heavily influenced by frustrated refinding behaviors.
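One simple clickstream check of this kind is to flag sessions in which the same URL is requested over and over, a possible sign of frustrated refinding rather than engaged reading. The sketch below illustrates the idea; the threshold and session data are invented, and a real analysis would combine signals like this with usability observation, as described above.

```python
# Flag sessions where one URL dominates, a possible refinding signal.
from collections import Counter

def looks_like_refinding(clickstream, threshold=3):
    """True if any single URL accounts for `threshold` or more views."""
    counts = Counter(clickstream)
    return any(n >= threshold for n in counts.values())

engaged_session = ["/post-1", "/post-2", "/post-3", "/post-4"]
frustrated_session = ["/archive", "/post-1", "/archive", "/search", "/archive"]

print(looks_like_refinding(engaged_session))     # no repeated URL
print(looks_like_refinding(frustrated_session))  # "/archive" visited 3 times
```

Both sessions would inflate the page-views-per-visitor average equally, which is exactly why the raw metric alone is misleading.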


I have always believed that search engine optimization is part art, part science. Certainly, keyword research data and web analytics data belong to the “science” side of SEO.

Nevertheless, the “art” part of SEO comes into play when interpreting this data. By listening to users and observing their search behaviors, having that one-on-one interaction, I can hear keywords that are not used in query formulation. I study facial expressions and corresponding mouse movements that are associated with keywords. I see how keywords are formatted in search engine results pages (SERPs) and corresponding landing pages, and how searchers react to that formatting and placement.

I cannot imagine my job as an SEO professional without keyword research tools and web analytics software. In addition, I cannot imagine my job as an SEO professional without one-on-one searcher interaction. What do you think? Have any of you learned something that keyword research tools and/or web analytics data did not reveal?


