
Google has said that mobile visits to travel sites now represent 40 percent of total travel traffic. Responding to this shift in consumer behavior, Google is introducing a range of new mobile hotel and flight search tools.

Google will now let mobile users filter hotel search results by price or rating (I captured the screens below this morning). In addition, Google says that hotel search will respond to more precise queries such as, “Pet-friendly hotels in San Francisco under $200.”

[Screenshot: Google hotel filters]

Mobile users will also see new deal labels when a room rate is below traditional price levels. This is similar to a feature that Google previously offered with desktop hotel search results. The company will also provide money-saving “hotel tips.” They appear to be based on travel date flexibility:

We may show Tips to people when they could save money or find better availability by moving their dates slightly. For example, you may see a Tip like, “Save $105 if you stay Wed, Jul 13 – Fri, Jul 15.”

Finally, Google will now offer airfare price tracking. Users can track fare changes on specific routes, airlines and dates. Travel searchers will then receive email alerts and Google Now notifications when prices “either increase or decrease significantly.”

Starting now, these changes will roll out first in the US and later across international markets.

Source:  http://searchengineland.com/google-offers-new-hotel-search-filters-deal-labels-airline-price-notifications-253817


Google Search is getting a new feature that alerts users every time their name is mentioned or appears on the Internet. The feature is called ‘Stay in the Loop,’ through which the search engine giant will notify users via their registered Gmail IDs as to where their names are mentioned.

The new feature works as long as a user is logged into their official Google account. They also have to give Google access to save their Web and App activity, which can be enabled via the Activity Controls menu.

“Save your search activity on apps and in browsers to make searches faster and get customized experiences in Search, Maps, Now, and other Google products,” says the Activity Controls page.

How does it work?

To activate the feature, users need to make sure they’re logged into their Google account and have granted Google access to track their Web and App activity. This can be done via the Activity Controls Menu.

Once they’ve granted Google the required access, the Stay in the Loop widget shows up at the bottom of the first page of search results. Clicking on the widget takes users to a Google Alerts form that already has their name in quotation marks. Once they’ve adjusted the settings, they just click Create alert, and they’re good to go.

From here on, users can set Google Alerts for mentions of their name. Users can also choose from a number of suggested topics to get alerts for, such as music, politics, sports and automobiles. Moreover, users can adjust settings such as email frequency, source types, languages and region.

Google announced the feature last month in a blog post, though it had not yet been released. Now the search engine giant has officially made the feature live. Along with the ‘Stay in the Loop’ feature, Google has also rolled out several improvements and other features for My Account of late. The company released the ‘Find your phone’ feature to help users locate their smartphones in case they’re lost or stolen. It also rolled out a new feature where users can access My Account through voice commands.

That being said, several reports suggest that the feature is now live in India. However, I still don’t see the new Stay in the Loop widget that is supposed to appear at the bottom of the first page of search results, even after granting Google access to my Web and App activity via the Activity Controls menu.

Did any of our readers get to see the new Stay in the Loop feature? Do let us know in the comments section below.

http://www.pc-tablet.co.in/google-search-notify-users-metioned-web/38836/


Google’s original mobile testing tool came out in 2014, and two years in the land of technology might as well be a lifetime. It was about time they came out with an update, and I’m happy to say it was worth the wait. According to Google, “people are five times more likely to leave a mobile site that isn’t mobile-friendly,” and “nearly half of all visitors will leave a mobile site if the pages don’t load within 3 seconds.” Put more bluntly, it’s imperative that business owners optimize their sites for mobile.

Read below to find out how to use the newest version of Google’s mobile testing tool and make sure your website is meeting the needs of your mobile customers.

How to Get the New Google Mobile Testing Tool

First things first: you can access the tool from Google Search Console’s mobile usability report. Once you’ve arrived at the tool, it’s as simple as entering your website’s URL into the search box, clicking “test now” and waiting for the results. The home screen will look something like this:

[Screenshot: the tool’s home screen]

Then, once you enter a URL, your results page will look something like this:

[Screenshot: a sample results page]

How to Interpret the Google Mobile Testing Tool Results

So now you know how to access the tool (it’s pretty self-explanatory and easy to use, thanks to Google!). Next, you need to know what those results mean. A test is worthless if you can’t use the results to make positive improvements.

In terms of the screenshot above, Google makes it pretty clear that the site is mobile friendly. The big green 99/100 rating for mobile friendliness is a pretty big giveaway. If you’re not looking for an in-depth analysis of your site, this might be just enough information to make you happy and send you on your way. However, you’d be missing out on some of the tool’s (not-so-hidden) features that could help improve your mobile site even more.

You’ll notice in the shot above that next to the mobile friendliness rating are ratings for mobile and desktop speed. Although Express scored high in the overall rating, they didn’t fare so well when it came to speed. This is just one example of the added information you get with the newest version of this tool.

If this was my site and my ratings, the first thing I’d work on fixing would be the speed of my site on both mobile and desktop.

One of the big differences between the old version of the tool and the updated version is that you now have access to this added information; in the past, all the tool said was whether or not your site was mobile friendly. Now, users get much more detailed information in the form of ratings on a 0-100 scale that cover not only mobile friendliness but also mobile and desktop speed.

Additional Features of the Google Mobile Testing Tool

Besides the new rating scale and the fact that you can get all three scores on one screen, Google has made another big change: it now gives you the option to have a comprehensive report sent to you that you can share with your team. If you click that button, a screen will appear that looks something like this:

[Screenshot: the report request screen]

Google is nice enough to give you some mobile tips in an easy-to-read, easy-to-understand format even before receiving your free report (which they promise will arrive within 24 hours). Here is a report that I had sent to me for amandadisilvestro.com:

[Screenshots: the emailed report for amandadisilvestro.com]

You can see that in the area where I scored poorly (mobile speed), Google tells me exactly what needs to be fixed. They even provide links that lead to technical support in case the team needs help fixing the problem. They’re pretty much taking the guesswork out of the whole thing, so truly optimizing a mobile site has never been this painless.

Possible Critiques of the Google Mobile Testing Tool

I do think it is interesting, and worth noting, that while there is a ton of information out there about how the tool works and how to use it, there isn’t a lot of information explaining the algorithm the tool uses to determine the three different ratings. All I was really able to determine was that it looks at things like CSS, HTML, scripts, and images and then evaluates how quickly (or slowly) your website loads.

So how do they determine where your site falls on the rating scale? Perhaps by how long it takes for your site to load past the 3-second mark, which they claim is the attention span people have for waiting on mobile sites. (Ironically enough, it takes longer than three seconds for Google’s site to complete its test.)

I became even more skeptical after coming across this article by Search Engine Watch. They did some more extensive tests and found that their site, along with Forbes, and many other sites, all received “poor” ratings for both mobile and desktop speed. In fact, the only site they could find that received good scores in all three categories was Google. When I did the test myself, I received the same results, as you can see below:

[Screenshot: test results for Google’s own site]

I hate to be a skeptic and go around touting a conspiracy theory, but what’s up with that, Google? Are all the other mobile sites out there really inferior to yours, or are you just trying to drum up business for your new tool?

Regardless of the critiques or potential fishiness, the tool is easy to use and is something I would recommend. After all, it’s free, and if you truly don’t believe what you see, then you don’t have to make any changes. If nothing else, it gets you thinking.

What do you think of Google’s new tool? Was your site able to score a “good” in more than one category? Comment in the section below and let us know what you think.

https://www.searchenginejournal.com/dont-miss-use-googles-new-mobile-testing-tool/168899/


Google, the world's leading search engine, has been unfairly subpoenaed by the Department of Justice, as part of a lawsuit to which it is not a party.

Federal prosecutors have asked Google, Microsoft, Yahoo and AOL to turn over logs showing search terms entered by search engine users, and a list of websites indexed by the portals' search engines.

Google has refused the Department of Justice's demand for this data, which the government wants for an upcoming lawsuit concerning the 1998 Child Online Protection Act. Two years ago the US Supreme Court issued an injunction preventing enforcement of the Act. The DoJ wants that injunction reversed; the ACLU has filed suit to prevent any such reversal. The trial date is set for June 12, 2006.

Federal prosecutors are not asking for any specific information that concerns privacy advocates, or for any personal or private information about Google's users, but Google asks why it should share its data, and how it became a party to this lawsuit in the first place. In this writer's opinion, Federal prosecutors are clearly overreaching in subpoenaing Google for this information.

The defendant in the COPA case (the government) would like to use the million website addresses to simulate the World Wide Web in order to test the effectiveness of some of the filtering programs it is developing. Leaving aside Google's motives in refusing to deliver this information, the question is: should governments defend their cases by using their might to lean on third-party businesses and private entities? And if companies do not comply with such requests, should governments invoke their subpoena powers?

Territorial Rights Management (TRM) and Digital Rights Management (DRM) are some of the technologies that, when coupled with encryption, security, user authentication and credit card validation, could most certainly address the concerns set forth in the COPA law and the reasons for the Court's injunction against its execution.

Similarly, given a little time, technological innovators could invent solutions that do not undermine the First and Fifth Amendments: another reason for courts to keep this law from being enforced until the industry can provide technological tools based on TRM and user authentication that will help parents protect their children from problematic websites and content. Such issues are explored in ABI Research's study Conditional Access & Digital Rights Management, which forms part of the Digital Media Distribution and Management Research Service.

http://www.hometoys.com/article/2016/07/raspberry-pi-and-matlab-based-3d-scanner/8541


What are business attributes, and why should local businesses care? Columnist Adam Dorfman explores.

When checking into places on Google Maps, you may have noticed that Google prompts you to volunteer information about the place you’re visiting. For instance, if you check into a restaurant, you might be asked whether the establishment has a wheelchair-accessible entrance or whether the location offers takeout. There’s a reason Google wants to know: attributes.

Attributes consist of descriptive content such as the services a business provides, payment methods accepted or the availability of free parking — details that may not apply to all businesses. Attributes are important because they can influence someone’s decision to visit you.

Google wants to set itself up as a go-to destination of rich, descriptive content about locations, which is why it crowdsources business attributes. But it’s not the only publisher doing so. For instance, if you publish a review on TripAdvisor or Yelp, you’ll be asked a similar battery of questions but with more details, such as whether the restaurant is appropriate for kids, allows dogs, has televisions or accepts bitcoins.

Many of these publishers are incentivizing this via programs like Google’s Local Guides, TripAdvisor’s Badge Collections, and Yelp’s Elite Squad because having complete, accurate information about locations makes each publisher more useful. And being more useful means attracting more visitors, which makes each publisher more valuable.

[Screenshot: Android crowdsourcing prompt]

It’s important that businesses manage their attributes as precious location data assets, if for no other reason than that publishers are doing so. I call publishers (and aggregators who share information with them) data amplifiers because they amplify a business’s data across all the places where people conduct local searches. If you want people to find your business and turn their searches into actual in-store visits, you need to share your data, including detailed attributes, with the major data amplifiers.

Many businesses believe their principal location data challenge is ensuring that their foundational data, such as their names, addresses and phone numbers, are accurate. I call the foundational data “identities,” and indeed, you need accurate foundational data to even be considered when people search for businesses. But as important as they are — and challenging to manage — identities solve for only one-half of the search challenge. Identities ensure visibility, but you need attributes to turn searches into business for your brand.

Attributes are not new, but they’ve become more important because of the way mobile is rapidly accelerating the purchase decision. According to seminal research published by Google, mobile has given rise to “micro-moments,” or times when consumers use mobile devices to make quick decisions about what to do, where to go or what to buy.

Google noted that the number of “near me” searches (searches conducted for goods and services nearby) has increased 146 percent year over year, and that 88 percent of these “near me” searches are conducted on mobile devices. As Google’s Matt Lawson wrote:

With a world of information at their fingertips, consumers have heightened expectations for immediacy and relevance. They want what they want when they want it. They’re confident they can make well-informed choices whenever needs arise. It’s essential that brands be there in these moments that matter — when people are actively looking to learn, discover, and/or buy.

Attributes encourage “next moments,” or the action that occurs after someone has found you during a micro-moment. Google understands that businesses failing to manage their attributes correctly will drop off the consideration set when consumers experience micro-moments. For this reason, Google prompts users to complete attributes about businesses when they check into a location on Google Maps.

At the 2016 Worldwide Developers Conference, Apple underscored the importance of attributes when the company rolled out a smarter, more connected Siri that makes it possible for users to create “next moments” faster by issuing voice commands such as “Siri, find some new Italian restaurants in Chicago, book me dinner, and get me an Uber to the restaurant.” In effect, Siri is a more efficient tool for enabling next moments, but only for businesses that manage the attributes effectively.

And with its recently released Google My Business API update to version 3.0, Google also gave businesses that manage offline locations a powerful competitive weapon: the ability to manage attributes directly. By making it possible to share attributes on your Google My Business page, Google has not only amplified its own role as a crucial publisher of attributes but has also given businesses an important tool to take control of their own destiny. It’s your move now.

http://searchengineland.com/google-mining-local-business-attributes-252283


In late 2015, JR Oakes and his colleagues undertook an experiment to attempt to predict Google ranking for a given webpage using machine learning. What follows are their findings, which they wanted to share with the SEO community.

Machine learning is quickly becoming an indispensable tool for many large companies. Everyone has, for sure, heard about Google’s AI algorithm beating the World Champion in Go, as well as technologies like RankBrain, but machine learning does not have to be a mystical subject relegated to the domain of math researchers. There are many approachable libraries and technologies that show promise of being very useful to any industry that has data to play with.

Machine learning also has the ability to turn traditional website marketing and SEO on its head. Late last year, my colleagues and I (rather naively) began an experiment in which we threw several popular machine learning algorithms at the task of predicting ranking in Google. We ended up with an ensemble that achieved 41 percent true positive and 41 percent true negative rates on our data set.

In the following paragraphs, I will take you through our experiment, and I will also discuss a few libraries and technologies that are important for SEOs to begin understanding.

Our experiment

Toward the end of 2015, we started hearing more and more about machine learning and its promise to make use of large amounts of data. The more we dug in, the more technical it became, and it quickly became clear that it would be helpful to have someone help us navigate this world.

About that time, we came across a brilliant data scientist from Brazil named Alejandro Simkievich. The interesting thing to us about Simkievich was that he was working in the area of search relevance and conversion rate optimization (CRO) and placing very well for important Kaggle competitions. (For those of you not familiar, Kaggle is a website that hosts machine learning competitions for groups of data scientists and machine learning enthusiasts.)

Simkievich is the owner of Statec, a data science/machine learning consulting company, with clients in the consumer goods, automotive, marketing and internet sectors. Lots of Statec’s work had been focused on assessing the relevance of e-commerce search engines. Working together seemed a natural fit, since we are obsessed with using data to help with decision-making for SEO.

We like to set big hairy goals, so we decided to see whether we could use the data available from scraping, rank trackers, link tools and a few other sources to create features that would allow us to predict the rank of a webpage. While we knew going in that the likelihood of pulling it off was very low, we still pushed ahead for the chance at an amazing win, as well as the opportunity to learn some really interesting technology.

The data

Fundamentally, machine learning is using computer programs to take data and transform it in a way that provides something valuable in return. “Transform” is a very loosely applied word, in that it doesn’t quite do justice to all that is involved, but it was selected for the ease of understanding. The point here is that all machine learning begins with some type of input data.

(Note: There are many tutorials and courses freely available that do a very good job of covering the basics of machine learning, so we will not do that here. If you are interested in learning more, Andrew Ng has an excellent free class on Coursera here.)

The bottom line is that we had to find data that we could use to train a machine learning model. At this point, we didn’t know exactly what would be useful, so we used a kitchen-sink approach and grabbed as many features as we could think of. GetStat and Majestic were invaluable in supplying much of the base data, and we built a crawler to capture everything else.

[Image: data used for analysis]

Our goal was to end up with enough data to successfully train a model (more on this later), and this meant a lot of data. For the first model, we had about 200,000 observations (rows) and 54 attributes (columns).

A little background

As I said before, I am not going to go into a lot of detail about machine learning, but it is important to grasp a few points to understand the next section. Broadly, most machine learning work today deals with regression, classification and clustering algorithms. I will define the first two here, as they were relevant to our project.

[Image: the difference between classification and regression algorithms]

  • Regression algorithms are normally useful for predicting a single number. If you needed an algorithm that predicted a stock price based on features of stocks, you would select this type of model. The values being predicted are called continuous variables.
  • Classification algorithms are used to predict a member of a class of possible answers. This could be a simple “yes or no” classification, or “red, green or blue.” If you needed to predict whether an unknown person was male or female from features, you would select this type of model. The values being predicted are called discrete variables. (A short sketch contrasting the two follows this list.)
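To make the distinction concrete, here is a minimal, purely illustrative scikit-learn sketch on toy data (not part of our actual pipeline):

```python
# Hypothetical illustration only: a regressor predicts a continuous number,
# while a classifier predicts a discrete class label.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])       # one toy feature
y_continuous = np.array([1.1, 1.9, 3.2, 3.8])    # e.g. a price
y_discrete = np.array([0, 0, 1, 1])              # e.g. "not top 10" / "top 10"

regressor = LinearRegression().fit(X, y_continuous)
classifier = LogisticRegression().fit(X, y_discrete)

print(regressor.predict([[2.5]]))         # a single number
print(classifier.predict([[2.5]]))        # a class label, 0 or 1
print(classifier.predict_proba([[2.5]]))  # probability of each class
```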

Machine learning is a very technical space right now, and much of the cutting-edge work requires familiarity with linear algebra, calculus, mathematical notation and programming languages like Python. One of the items that helped me understand the overall flow at an approachable level, though, was to think of machine learning models as applying weights to the features in the data you give it. The more important the feature, the stronger the weight.

When you read about “training models,” it is helpful to visualize a string connected through the model to each weight, and as the model makes a guess, a cost function is used to tell you how wrong the guess was and to gently, or sternly, pull the string in the direction of the right answer, correcting all the weights.
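In most models, that “pull on the string” is a gradient update. Here is a toy illustration of a single training step (one weight, squared-error cost; not our actual code):

```python
# Toy illustration: the cost function measures how wrong the guess was,
# and its gradient nudges the weight toward the right answer.
w = 0.0                   # current weight
x, y_true = 2.0, 3.0      # one training example
learning_rate = 0.1

y_pred = w * x                          # the model's guess
cost = (y_pred - y_true) ** 2           # how wrong the guess was
gradient = 2 * (y_pred - y_true) * x    # d(cost)/d(w)
w = w - learning_rate * gradient        # "pull the string" toward the answer

print(cost, w)  # cost before the update, weight after it
```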

The part below gets a bit technical with terminology, so if it is too much for you, feel free to skip to the results and takeaways in the final section.

Tackling Google rankings

Now that we had the data, we tried several approaches to the problem of predicting the Google ranking of each webpage.

Initially, we used a regression algorithm. That is, we sought to predict the exact ranking of a site for a given search term (e.g., a site will rank X for search term Y), but after a few weeks, we realized that the task was too difficult. First, a ranking is by definition a characteristic of a site relative to other sites, not an intrinsic characteristic of the site (as word count is, for example). Since it was impossible for us to feed our algorithm with all the sites ranked for a given search term, we reformulated the problem.

We realized that, in terms of Google ranking, what matters most is whether a given site ends up on the first page for a given search term. Thus, we re-framed the problem: What if we try to predict whether a site will end up in the top 10 sites ranked by Google for a certain search term? We chose top 10 because, as they say, you can hide a dead body on page two!

From that standpoint, the problem turns into a binary (yes or no) classification problem, where we have only two classes: a) the site is a top 10 site, or b) the site is not a top 10 site. Furthermore, instead of making a binary prediction, we decided to predict the probability that a given site belongs to each class.

Later, to force ourselves to make a clear-cut decision, we decided on a threshold above which we predict that a site will be top 10. For example, if the threshold is 0.85, then whenever the predicted probability of a site being in the top 10 is higher than 0.85, we go ahead and predict that the site will be in the top 10.
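The thresholding step itself is trivial. A hedged sketch, with invented probabilities and an assumed threshold of 0.85:

```python
import numpy as np

# Hypothetical predicted probabilities that each page lands in the top 10.
probabilities = np.array([0.92, 0.40, 0.86, 0.10])
threshold = 0.85  # assumed value; in practice it is tuned on validation data

predicted_top10 = probabilities >= threshold
print(predicted_top10)  # [ True False  True False]
```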

To measure the performance of the algorithm, we decided to use a confusion matrix.

The following chart provides an overview of the entire process.

[Image: overview of our machine learning process]

Cleaning the data

We used a data set of 200,000 records, including roughly 2,000 different keywords/search terms.

In general, we can group the attributes we used into three categories:

  • Numerical features
  • Categorical variables
  • Text features

Numerical features are those that can take on any number within an infinite or finite interval. Some of the numerical features we used are ease of read, grade level, text length, average number of words per sentence, URL length, website load time, number of domains referring to the website, number of .edu domains referring to the website, number of .gov domains referring to the website, Trust Flow for a number of topics, Citation Flow, Facebook shares, LinkedIn shares and Google shares. We applied a standard scaler to these features to center them around the mean, but other than that, they require no further preprocessing.
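For readers who want to see the scaling step, here is a minimal scikit-learn sketch (the column values and their meanings are invented for illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical numerical features: [text length, URL length, referring domains]
X_numeric = np.array([
    [1200.0, 35.0, 150.0],
    [ 300.0, 80.0,  12.0],
    [ 950.0, 42.0,  87.0],
])

scaler = StandardScaler()                    # centers each column on its mean
X_scaled = scaler.fit_transform(X_numeric)   # and scales it to unit variance
print(X_scaled.mean(axis=0))                 # approximately 0 for every column
```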

A categorical variable is one that can take on a limited number of values, with each value representing a different group or category. The categorical variables we used include the most frequent keywords, locations and organizations throughout the site, in addition to topics for which the website is trusted. Preprocessing for these features included turning them into numerical labels and subsequent one-hot encoding.
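A small sketch of the one-hot encoding step (the category values are invented; current scikit-learn can encode the strings directly, collapsing the label-then-one-hot sequence into one call):

```python
from sklearn.preprocessing import OneHotEncoder

# Hypothetical categorical feature: the organization most mentioned on the page.
orgs = [["Google"], ["Acme Corp"], ["Google"], ["Statec"]]

# Each category becomes its own 0/1 column; unseen categories are ignored.
encoder = OneHotEncoder(handle_unknown="ignore")
one_hot = encoder.fit_transform(orgs)   # returns a sparse matrix
print(one_hot.toarray())
```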

Text features are obviously composed of text. They include search term, website content, title, meta-description, anchor text, headers (H3, H2, H1) and others.

It is important to highlight that there is not a clear-cut difference between some categorical attributes (e.g., organizations mentioned on the site) and text, and some attributes indeed switched from one category to the other in different models.

Feature engineering

We engineered additional features that correlate with rank.

Most of these features are Boolean (true or false), but some are numerical. An example of a Boolean feature is whether the exact search term appears in the website text, whereas a numerical feature would be how many of the tokens in the search term appear in the website text.

Below are some of the features we engineered.

[Image: Boolean and quantitative features that were engineered]

Run TF-IDF

To pre-process the text features, we used the TF-IDF algorithm (term-frequency, inverse document frequency). This algorithm views every instance as a document and the entire set of instances as a corpus. Then, it assigns a score to each term, where the more frequent the term is in the document and the less frequent it is in the corpus, the higher the score.

We tried two TF-IDF approaches, with slightly different results depending on the model. The first approach consisted of concatenating all the text features first and then applying the TF-IDF algorithm (i.e., the concatenation of all text columns of a single instance becomes the document, and the set of all such instances becomes the corpus). The second approach consisted of applying the TF-IDF algorithm separately to each feature (i.e., every individual column is a corpus), and then concatenating the resulting arrays.

The resulting array after TF-IDF is very sparse (most columns for a given instance are zero), so we applied dimensionality reduction (singular value decomposition) to reduce the number of attributes/columns.
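A compressed sketch of the first approach (concatenate the text columns, run TF-IDF, then reduce dimensionality); the documents and component count below are placeholders, not our real data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Hypothetical rows: each is the concatenation of a page's text columns
# (title, meta description, headers, body, anchor text, ...).
documents = [
    "pet friendly hotels san francisco cheap hotel deals",
    "machine learning tutorial python scikit-learn",
    "best italian restaurants chicago book a table",
]

tfidf = TfidfVectorizer()
X_sparse = tfidf.fit_transform(documents)   # very sparse term/document matrix

svd = TruncatedSVD(n_components=2)          # singular value decomposition step
X_dense = svd.fit_transform(X_sparse)       # far fewer columns, dense array
print(X_sparse.shape, "->", X_dense.shape)
```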

The final step was to concatenate all resulting columns from all feature categories into an array. This we did after applying all the steps above (cleaning the features, turning the categorical features into labels and performing one-hot encoding on the labels, applying TF-IDF to the text features and scaling all the features to center them around the mean).

Models and ensembles

Having obtained and concatenated all the features, we ran a number of different algorithms on them. The algorithms that showed the most promise are gradient boosting classifier, ridge classifier and a two-layer neural network.

Finally, we assembled the model results using simple averages, and thus we saw some additional gains as different models tend to have different biases.
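As a rough illustration of that averaging step, here is a sketch on synthetic data. The models and parameters are stand-ins rather than our exact setup, and because scikit-learn's ridge classifier does not expose probabilities, a logistic regression stands in as the linear model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: 1 means "top 10", 0 means "not top 10".
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [
    GradientBoostingClassifier(random_state=0),
    LogisticRegression(max_iter=1000),                # linear model
    MLPClassifier(hidden_layer_sizes=(64, 64),        # two-layer neural net
                  max_iter=1000, random_state=0),
]

probas = []
for model in models:
    model.fit(X_train, y_train)
    probas.append(model.predict_proba(X_test)[:, 1])

# Simple average of the probabilities; different models carry different biases.
ensemble_proba = np.mean(probas, axis=0)
print(ensemble_proba[:5])
```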

Optimizing the threshold

The last step was to decide on a threshold to turn probability estimations into binary predictions (“yes, we predict this site will be top 10 in Google” or “no, we predict this site will not be top 10 in Google”). For that, we optimized the threshold on a cross-validation set and then used the obtained threshold on a test set.
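A hedged sketch of that threshold search, using balanced accuracy as the score and random stand-in arrays in place of our real validation split:

```python
import numpy as np

# Assumed inputs: predicted probabilities and true labels on a validation split.
rng = np.random.default_rng(0)
val_proba = rng.random(1000)
val_true = (val_proba + rng.normal(0, 0.3, 1000)) > 0.5  # noisy stand-in labels

best_threshold, best_score = 0.5, -1.0
for threshold in np.linspace(0.05, 0.95, 19):
    pred = val_proba >= threshold
    # Balanced accuracy: the average of the true-positive and true-negative
    # rates, mirroring the concern with both kinds of correctness.
    tpr = (pred & val_true).sum() / max(val_true.sum(), 1)
    tnr = (~pred & ~val_true).sum() / max((~val_true).sum(), 1)
    score = (tpr + tnr) / 2
    if score > best_score:
        best_threshold, best_score = threshold, score

print(best_threshold, best_score)  # then apply best_threshold to the test set
```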

Results

The metric we thought would be the most representative to measure the efficacy of the model is a confusion matrix. A confusion matrix is a table that is often used to describe the performance of a classification model (or “classifier”) on a set of test data for which the true values are known.

I am sure you have heard the saying that “a broken clock is right twice a day.” With 100 results for every keyword, a random guess would correctly predict “not in top 10” 90 percent of the time. The confusion matrix captures the accuracy of both positive and negative answers. We obtained roughly a 41 percent true positive rate and a 41 percent true negative rate in our best model.
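Computing the matrix itself is a one-liner in scikit-learn; the labels below are invented purely to show the layout:

```python
from sklearn.metrics import confusion_matrix

# Invented labels: 1 = "in the top 10", 0 = "not in the top 10".
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Rows are the true classes, columns the predicted classes:
# [[true negatives, false positives],
#  [false negatives, true positives]]
print(confusion_matrix(y_true, y_pred))
```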

[Image: confusion matrix of our best model]

Another way of visualizing the effectiveness of the model is by using an ROC curve. An ROC curve is “a graphical plot that illustrates the performance of a binary classifier system as its discrimination threshold is varied. The curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.” The non-linear models used in the ensemble were XGBoost and a neural network. The linear model was logistic regression. The ensemble plot represents a combination of the linear and non-linear models.
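For readers who want to reproduce the shape of such a plot, here is a generic sketch with scikit-learn and matplotlib on synthetic data (not our model or our results):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

fpr, tpr, thresholds = roc_curve(y_test, scores)   # sweep the threshold
plt.plot(fpr, tpr, label=f"AUC = {auc(fpr, tpr):.2f}")
plt.plot([0, 1], [0, 1], linestyle="--")           # chance-level baseline
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```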

[Image: ROC curve generated by our model]

XGBoost is short for “Extreme Gradient Boosting,” with gradient boosting being “a machine learning technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees.”

The chart below shows the relative contribution of the feature categories to the accuracy of the final prediction of this model. Unlike neural networks, XGBoost, along with certain other models, allows you to easily peek into the model to tell the relative predictive weight that particular features hold.
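Peeking into a trained XGBoost model is straightforward; here is a sketch with the xgboost library, using synthetic data and invented feature names:

```python
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic stand-in data with five features; names are invented for illustration.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["trust_flow", "citation_flow", "text_length",
                 "referring_domains", "load_time"]

model = xgb.XGBClassifier()
model.fit(X, y)

# Relative predictive weight each feature carries in the trained model.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.3f}")
```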

[Graph: predictive importance by feature category]

We were quite impressed that we were able to build a model that showed predictive power from the features that we had given it. We were very nervous that our limitation of features would lead to the utter fruitlessness of this project. Ideally, we would have a way to crawl an entire site to gain overall relevance. Perhaps we could gather data on the number of Google reviews a business had. We also understood that Google has much better data on links and citations than we could ever hope to gather.

What we learned

Machine learning is a very powerful tool that can be used even if you do not understand fully the complexity of how it works. I have read many articles about RankBrain and the inability of engineers to understand how it works. This is part of the magic and beauty of machine learning. Similar to the process of evolution, in which life gains different features and some live and some die, the process of machine learning finds the way to the answer instead of being given it.

While we were happy with the results of our first models, it is important to understand that this was trained on a relatively small sample compared to the immense size of the internet. One of the key goals in building any kind of machine learning tool is the idea of generalization and operating effectively on data that has never been seen before. We are currently testing our model on new queries and will continue to refine.

The largest takeaway for me in this project was just starting to get a grasp on the immense value that machine learning has for our industry. A few of the ways I see it impacting SEO are:

  • Text generation, summarization and categorization. Think about smart excerpts for content and websites that potentially self-organize based on classification.
  • Never having to write another ALT parameter (See below).
  • New ways of looking at user behavior and classification/scoring of visitors.
  • Integration of new ways of navigating websites using speech and smart Q&A style content/product/recommendation systems.
  • Entirely new ways of mining analytics and crawled data to give insights into visitors, sessions, trends and potentially visibility.
  • Much smarter tools in distribution of ad channels to relevant users.

This project was more about learning for us rather than accomplishing a holy grail (of sorts). Much like the advice I give to new developers (“the best learning happens while doing”), it is important to get your hands dirty and start training. You will learn to gather, clean and organize data, and you’ll familiarize yourself with the ins and outs of various machine learning tools.

Much of this is familiar to more technical SEOs, but the industry also is developing tools to help those who are not as technically inclined. I have compiled a few resources below that are of interest in understanding this space.

Recent technologies of interest

It is important to understand that the vast majority of machine learning is not about building a human-level AI, but rather about using data to solve real problems. Below are a few examples of recent ways this is happening.

NeuralTalk2

NeuralTalk2 is a Torch model by Andrej Karpathy for generating natural language descriptions of given images. Imagine never having to write another ALT parameter again and having a machine do it for you. Facebook is already incorporating this technology.

Microsoft Bots and Alexa

Researchers are mastering speech processing and are starting to be able to understand the meaning behind words (given their context). This has deep implications for traditional websites and how their information is accessed. Instead of navigation and search, the website could have a conversation with your visitors. In the case of Alexa, there is no website at all, just the conversation.

Natural language processing

There is a tremendous amount of work going on right now in the realm of translation and content semantics. It goes far beyond traditional Markov chains and n-gram representations of text. Machines are showing the initial hints of abilities to summarize and generate text across domains. “The Unreasonable Effectiveness of Recurrent Neural Networks” is a great post from last year that gives a glimpse of what is possible here.

Home Depot search relevance competition

Home Depot recently sponsored an open competition on Kaggle to predict the relevance of their search results to the visitor’s query. You can see some of the process behind the winning entries on this thread.

How to get started with machine learning

Because we, as search marketers, live in a world of data, it is important for us to understand new technologies that allow us to make better decisions in our work. There are many places where machine learning can help our understanding, from better knowing the intent of our users to which site behaviors drive which actions.

For those of you who are interested in machine learning but are overwhelmed with the complexity, I would recommend Data Science Dojo. There are simple tutorials using Microsoft’s Machine Learning Studio that are very approachable to newbies. This also means that you do not have to learn to code prior to building your first models.

If you are interested in more powerful customized models and are not afraid of a bit of code, I would probably start with listening to this lecture by Justin Johnson at Stanford, as it goes through the four most common libraries. A good understanding of Python (and perhaps R) is necessary to do any work of merit. Christopher Olah has a pretty great blog that covers a lot of interesting topics involving data science.

Finally, GitHub is your friend. I find myself looking through recently added repos to see the incredibly interesting projects people are working on. In many cases, data is readily available, and there are pretrained models that perform certain tasks very well. Looking around and becoming familiar with the possibilities will give you some perspective on this amazing field.

http://searchengineland.com/experiment-trying-predict-google-rankings-253621


Everybody loves Google. Searching with Google has saved us many troubles in the past and will continue to do so. Whether it’s an assignment or a project, a research paper or dissertation, or you’re just trying to cheat on some radio call-in game show, searching with Google has hardly ever let us down. But did you know that Google is a whole lot more than just a search engine?

Calculate anything

 

Call up Google (on your computer or mobile, it doesn’t matter) and type in your calculation, any at all. The basic arithmetic operations use *, +, - and / for multiplication, addition, subtraction and division respectively. For example, type 5*2+4 into the search box and hit Enter. You get the answer instantly!

[Screenshot: Google calculator result]

Google can also do advanced scientific calculations (you can delete your phone’s calculator app now). Go ahead, try 100*3.14-cos(83) or maybe 5*9+(sqrt 10)^3 and see what you get.

Translate anything

Google can translate to and from English for a vast range of languages, including Yoruba, Igbo and Hausa. For example, if I want to find out what ‘Airplane’ is in French, I just type in Translate ‘Airplane’ to French (with the quotes and not case-sensitive) and I get my answer.

[Screenshot: Google translation to French]

To translate “have you eaten?” to Hausa, just type translate ‘have you eaten’ to Hausa (without the quotes of course).

[Screenshot: Google translation to Hausa]

Google as a Dictionary

We try not to bombard you with heavy words on here, but if you ever come across a word you don’t know, just type ‘define:’ before the word in Google search and you get a definition instantly. For example, to find out the meaning of, say, idiosyncrasy, just type define: idiosyncrasy.

[Screenshot: Google dictionary definition]

You can now burn your dictionary. It is outdated.

Google as a clock

Do you have a close friend or lover in obodo oyinbo (abroad) and want to get a sense of when they’ll be awake to talk? With Google, you can see what time it is anywhere in the world; just type “time” and the city or country. For example, to know the time in Dubai right now, just type time Dubai in Google search.

Convert Money and Units

By searching “[amount + original currency code] in [new currency code]”, you can find out the official exchange rate between any two currencies. For example, typing 1000 ngn in usd in the search box will give you 1000 naira’s equivalent in dollars. That simple!


[Screenshot: currency conversion result]

For a full list of currency codes, go here.

You can also convert units (inches to metres, litres to gallons, etc.) using the same format but replacing ‘in’ with ‘to’. For example, typing 20 litres to gallons will give you the exact answer. Make sure you get the spellings right though (and in full).

[Screenshot: unit conversion result]

 

 Search Within A Specific Site

If you want to find out information from within a particular site (you probably know it’s there but you can’t remember where you last saw it, and you don’t want to have to rummage the site again for endless hours), just put site:[website url] after your search query in Google.

For example, to find all info about Boko Haram published on the This Day newspaper site, just type Boko haram site:thisdayonline.com into search. You could also try Buhari site:facebook.com. You didn’t get this idea from me o!

Now that you’ve become a certified pro Google user, don’t be stingy, share with others so that they can be in the know too.

 

Source: https://techpoint.ng/2016/05/10/how-to-google-search/

 

