

Jay Harris

When we asked Google’s Gary Illyes about Penguin, he said SEOs should, for the most part, focus on where their links come from, but that they have less to worry about now that Penguin devalues those links rather than demoting the site.

Here’s another nugget of information from the A conversation with Google’s Gary Illyes (part 1) podcast at our sister site Marketing Land: Penguin is billed as a “web spam” algorithm, but it focuses mostly on “link spam.” Google has continually told webmasters that Penguin is a web spam algorithm, yet every webmaster and SEO focuses mostly on links. Google’s Gary Illyes said that focus is right: when tackling Penguin issues, they should be mostly concerned with the links.

Gary Illyes made a point of clarifying that it isn’t just the link itself but the “source site” the link is coming from. Google said Penguin is based on “the source site, not on the target site.” You want your links to come from quality sources rather than low-quality ones.

As one example, Gary described a negative SEO case that had been submitted to him, in which the majority of the links sat on “empty profile pages, forum profile pages.” When he looked at those links, the new Penguin algorithm was already “discounting,” or devaluing, them.

“The good thing is that it is discounting the links, basically ignoring the links instead of the demoting,” Gary Illyes added. 

Barry Schwartz: You also talked about web spam versus link spam and Penguin. I know John Mueller specifically called it out again, in the original Penguin blog post that you had posted, that you said this is specifically a web spam algorithm. But every SEO that I know focuses just on link spam regarding Penguin. And I know when you initially started talking about this on our podcast just now, you said it’s mostly around really really bad links. Is that accurate to say when you talk about Penguin, [that] typically it’s around really, really bad links and not other types of web spam?

Gary Illyes: It’s not just links. It looks at a bunch of different things related to the source site. Links is just the most visible thing and the one that we decided to talk most about, because we already talked about links in general.

But it looks at different things on the source site, not on the target site, and then makes decisions based on those special signals.

I don’t actually want to reveal more of those spam signals because I think they would be pretty, I wouldn’t say easy to spam, but they would be easy to mess with. And I really don’t want that.

But there are quite a few hints in the original, the old Penguin article.

Barry Schwartz: Can you mention one of those hints that is in the article?

Gary Illyes: I would rather not. I know that you can make pretty good assumptions. So I would just let you make assumptions.

Danny Sullivan: If you were making assumptions, how would you make those assumptions?

Gary Illyes: I try not to make assumptions. I try to make decisions based on data.

Barry Schwartz: Should we be focusing on the link spam aspect of it for Penguin? Obviously, focus on all the “make best quality sites,” yada-yada-yada, but we talk about Penguin as reporters, and we’re telling people that SEOs are like Penguin specialists or something like that, that they only focus on link spam. Is that wrong? I mean, should they?

Gary Illyes: I think that’s the main thing that they should focus on.

See where it is coming from, and then make a decision based on the source site — whether they want that link or not.

Well, for example, I was looking at a negative SEO case just yesterday or two days ago. And basically, the content owner placed hundreds of links on empty profile pages, forum profile pages. Those links were discounted with the new Penguin. But if you looked at the page, it was pretty obvious that the links were placed there for a very specific reason, and that’s to game the ranking algorithms, not just Google’s but any other ranking algorithm that uses links. If you look at a page, you can make a pretty easy decision on whether to disavow or remove that link or not. And that’s what Penguin is doing. It’s looking at signals on the source page: basically, what kind of page it is, what could be the purpose of that link, and then makes a decision based on that whether to discount those things or not.

The good thing is that it is discounting the links, basically ignoring the links instead of the demoting.

So in general, unless people are overdoing it, it’s unlikely that they will actually feel any sort of effect by placing those. But again, if they are overdoing it, then the manual actions team might take a deeper look.
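For site owners who follow that advice, the judgment call Gary describes (look at the source page, decide whether you want the link) sometimes ends in a disavow file. The snippet below is a minimal, illustrative sketch of producing one; the domains and URLs are made up, but the output follows the plain-text format Google's disavow tool accepts: "#" comment lines, "domain:" entries to cover a whole host, and bare URLs for individual pages.

```python
# Illustrative only: write a disavow.txt in the plain-text format Google's
# disavow-links tool accepts ("#" comments, "domain:" entries, bare URLs).
# The domains and URLs below are made-up examples from a hypothetical audit.

flagged_domains = {
    "spam-forum.example": "links sit only on empty forum profile pages",
    "link-network.example": "obvious paid link network",
}
flagged_urls = [
    "http://blog.example/old-post-with-spammy-comment.html",
]

lines = ["# Disavow file generated after a manual link audit"]
for domain, reason in sorted(flagged_domains.items()):
    lines.append(f"# {reason}")
    lines.append(f"domain:{domain}")
lines.extend(flagged_urls)

with open("disavow.txt", "w", encoding="utf-8") as fh:
    fh.write("\n".join(lines) + "\n")
```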

You can listen to part one of the interview at Marketing Land.

Source: Search Engine Land

Thursday, 13 October 2016 14:12

Why more women are in top internet jobs

Ant Financial Service Group, the financial arm of e-commerce giant Alibaba, recently appointed Eric Jing as chief executive.

Lucy Peng, who has been at the helm of Ant Financial since its founding two years ago, will remain chairman.

The move is said to pave the way for a Hong Kong initial public offering in 2017.

Peng, 43, a former economics professor, is one of 18 first-generation team members of Alibaba.

She served as Alibaba’s chief people officer (CPO) for a long time before heading Ant Financial in 2013.

As the CPO, Peng focused on how to instill the right corporate culture and values in the group’s massive staff of nearly 40,000 people.

Peng is widely regarded as No. 3 in Alibaba behind Jack Ma and Group vice chairman Cai Chongxin.

Previously, Alibaba faced a serious credibility issue caused by fake goods and wrongdoing by merchants on its website.

Peng was the key person behind the effort to resolve the crisis.

Peng is well known for her patience, attention to detail and excellent communication skills.

It is said that she would e-mail back and forth with technical staff a hundred times over a minor product or service.

In a letter to employees announcing the personnel change, Ma wrote that Peng has demonstrated “outstanding leadership using unique insight as a woman”, and was a “rock-steady presence in the face of changing times”.

In the past, the internet world was dominated by men, who are typically better at science and engineering.

In fact, most programmers are male. The founders of the world’s top 10 internet firms are all men.

However, more women are serving in key positions such as CEO, COO or CPO at internet giants these days.

Peng, for instance, joined the ranks of the most powerful women in internet giants, including Facebook chief operating officer Sheryl Sandberg, YouTube CEO Susan Wojcicki and Yahoo CEO Marissa Mayer.

In the early stage of the internet industry, technology was at the core of competitiveness.

A company that could develop a unique technology could often beat its rivals, so engineers and program developers were assigned the top posts.

Nowadays, all internet giants have cloud, big data, e-commerce, social networking platforms and artificial intelligence.

It’s not easy for ordinary customers to tell the difference. Brand image, company values and user experience are increasingly making the difference.

Women leaders usually do better in these areas.

Samsung’s Galaxy Note 7 saga is a good example.

The poor handling of the new smartphone’s explosion cases may in fact have something to do with Samsung’s all-male leadership team.

The CEO, CFO, COO, as well as the nine-person board of the Korean tech giant are all male.

By contrast, its arch rival Apple has two women on the board. Angela Ahrendts, former Burberry CEO, now serves as Apple’s vice-president of retail and online stores.

Source: ejinsight.com

Even if your user downloads your app, which has app indexing deployed, Google will show them the AMP page over your app page.

At SMX East yesterday, Adam Greenberg, head of Global Product Partnerships at Google, gave a talk about AMP. He said during the question and answer time that AMP pages will override app deep links for the “foreseeable future.”

Last week, we covered how, when Google began rolling out AMP to the core mobile results, it quietly added to its changelog that AMP pages will trump app deep links. In short, when a user installs a publisher’s app, does a search on the phone where that app resides and clicks a link in the Google mobile results that could have opened the app, Google will instead show the AMP page, not the content within the app the user installed.

Google has made several large pushes with App Indexing through the years. These were incentives to get developers to add deep links and App Indexing to their apps — such as installing apps from the mobile results, app indexing support for iOS apps, a ranking boost for deploying app indexing, Google Search Console reporting and so much more.

But now, if your website has both deployed app indexing and AMP, your app indexing won’t be doing much for you to drive more visits to your native iOS or Android app.
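For publishers who want to check which of the two signals a given article page actually exposes, both are declared in the page’s head: an AMP version via a link tag with rel="amphtml", and an Android app deep link via a rel="alternate" link whose href uses the android-app:// scheme. The sketch below uses only the Python standard library to report both; the URL is a placeholder, not a real article.

```python
# Minimal sketch: report whether a page declares an AMP alternate and/or an
# Android app deep link in its <head>. The URL below is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen


class HeadLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.amp_url = None
        self.app_url = None

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        rel = (attrs.get("rel") or "").lower()
        href = attrs.get("href") or ""
        if rel == "amphtml":
            self.amp_url = href
        elif rel == "alternate" and href.startswith("android-app://"):
            self.app_url = href


html = urlopen("https://example.com/article").read().decode("utf-8", "replace")
parser = HeadLinkParser()
parser.feed(html)
print("AMP version:", parser.amp_url or "none declared")
print("App deep link:", parser.app_url or "none declared")
```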

Google told us they “have found that AMP helps us deliver” on a better user experience “because it is consistently fast and reliable.” Google added, “AMP uses 10x less data than a non-AMP page.” Google told us that “people really like AMP” and are “more likely to click on a result when it’s presented in AMP format versus non-AMP.”

Google also told us that they “support both approaches,” but “with AMP — and the ability to deliver a result on Google Search in a median time of less than a second — we know we can provide that reliable and consistently fast experience.”

Personally, as a publisher who has deployed virtually everything Google has asked developers to deploy — from specialized Google Custom Search features to authorship, app indexing, AMP, mobile-friendly design, HTTPS and more — I find this a bit discouraging, to say the least.

I think if a user has downloaded the app, keeps the app on their device and consumes content within the app, that user would prefer seeing the content within the publisher’s app versus on a lightweight AMP page. But Google clearly disagrees with my personal opinion on this matter.

Source: Search Engine Land

Monday, 29 August 2016 04:13

Mozilla invests in browser Cliqz

Mozilla made a strategic investment in Cliqz, maker of an iOS and Android browser with a built-in search engine, “to enable innovation of privacy-focused search experiences”.

Mark Mayo, SVP of Mozilla Firefox, said Cliqz’s products “align with the Mozilla mission. We are proud to help advance the privacy-focused innovation from Cliqz through this strategic investment in their company”.

Cliqz is based in Munich and is majority-owned by international media and technology company Hubert Burda Media.

The Cliqz for Firefox Add-on is already available as a free download. It adds to Firefox “an innovative quick search engine as well as privacy and safety enhancements such as anti-tracking”, said Mozilla.

Cliqz quick search is available in Cliqz’s browsers for Windows, Mac, Linux, Android and iOS. The desktop and iOS versions are built on Mozilla Firefox open source technology and offer built-in privacy and safety features.

Cliqz quick search is optimised for the German language and shows website suggestions, news and information to enable users to search quickly.

Cliqz claims that while conventional search engines primarily work with data related to the content, structure and linking of websites, it instead works with statistical data on actual search queries and website visits.

It has developed a technology capable of collecting this information and then building a web index out of it, something it calls the ‘Human Web’.
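As a rough illustration of that idea, and only as an illustration (this is not Cliqz’s actual Human Web code), a query-to-page index can be built from anonymised records of which page a search for a given query ultimately led to, then ranked purely by those visit counts:

```python
# Simplified sketch of a query-log index: rank pages for a query purely by how
# often real searches for that query ended in a visit to the page.
# Illustrative only; this is not Cliqz's actual "Human Web" implementation.
from collections import Counter, defaultdict

# (query, visited_url) pairs, e.g. collected anonymously from browser sessions
visit_log = [
    ("bundesliga results", "https://www.kicker.de/"),
    ("bundesliga results", "https://www.bundesliga.com/"),
    ("bundesliga results", "https://www.kicker.de/"),
    ("wetter berlin", "https://www.wetter.de/berlin"),
]

index = defaultdict(Counter)
for query, url in visit_log:
    index[query][url] += 1

def suggest(query, k=3):
    """Return the k most-visited pages for this exact query."""
    return [url for url, _ in index[query].most_common(k)]

print(suggest("bundesliga results"))
```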

What’s more, Cliqz’s “privacy-by-design” architecture guarantees that no personal data or personally identifiable information is transmitted to or saved on its servers.

Jean-Paul Schmetz, founder and managing director at Cliqz, said Mozilla is the ideal company to work with because both parties believe in an open internet where people have control over their data.

“Data and search are our core competencies and it makes us proud to contribute our search and privacy technologies to the Mozilla ecosystem,” he said.

Source: http://www.mobileworldlive.com/apps/news-apps/mozilla-invests-in-browser-cliqz/

We’re all a bit worried about the terrifying surveillance state that becomes possible when you cross omnipresent cameras with reliable facial recognition — but a new study suggests that some of the best algorithms are far from infallible when it comes to sorting through a million or more faces.

The University of Washington’s MegaFace Challenge is an open competition among public facial recognition algorithms that’s been running since late last year. The idea is to see how systems that outperform humans on sets of thousands of images do when the database size is increased by an order of magnitude or two.

See, while many of the systems out there learn to find faces by perusing millions or even hundreds of millions of photos, the actual testing has often been done on sets like the Labeled Faces in the Wild one, with 13,000 images ideal for this kind of thing. But real-world circumstances are likely to differ.

“We’re the first to suggest that face recs algorithms should be tested at ‘planet-scale,'” wrote the study’s lead author, Ira Kemelmacher-Shlizerman, in an email to TechCrunch. “I think that many will agree it’s important. The big problem is to create a public dataset and benchmark (where people can compete on the same data). Creating a benchmark is typically a lot of work but a big boost to a research area.”

The researchers started with existing labeled image sets of people — one set consisting of celebrities from various angles, another of individuals with widely varying ages. They added noise to this signal in the form of “distractors,” faces scraped from Creative Commons licensed photos on Flickr.

They ran the test with as few as 10 distractors or as many as a million — essentially, the number of needles stayed the same but they piled on the hay.
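In identification terms, each probe face is matched against a gallery containing the known identities plus N distractor faces, and the benchmark reports how often the nearest neighbour is the right person (rank-1 accuracy) as N grows. A minimal sketch of that evaluation, assuming faces have already been encoded as fixed-length embedding vectors:

```python
# Minimal sketch of rank-1 identification with distractors, assuming each face
# is already encoded as an L2-normalised embedding vector (numpy arrays).
import numpy as np

rng = np.random.default_rng(0)

def normalise(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

dim, n_known, n_distractors = 128, 100, 100_000
known = normalise(rng.normal(size=(n_known, dim)))        # gallery identities
distractors = normalise(rng.normal(size=(n_distractors, dim)))
# Probes: a second (noisy) embedding of each known identity.
probes = normalise(known + 0.6 * rng.normal(size=known.shape))

gallery = np.vstack([known, distractors])   # same needles, ever more hay
scores = probes @ gallery.T                 # cosine similarity
rank1_hits = scores.argmax(axis=1) == np.arange(n_known)
print(f"rank-1 accuracy with {n_distractors:,} distractors: {rank1_hits.mean():.2%}")
```

With random embeddings this only exercises the bookkeeping, but it makes the point: the probes and known identities stay fixed while the gallery grows, so accuracy can only fall as distractors are added, which is the effect the results below report.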

[Figure: MegaFace results, showing identification accuracy falling as the number of distractors grows]

The results show a few surprisingly tenacious algorithms: The clear victor for the age-varied set is Google’s FaceNet, while it and Russia’s N-TechLab are neck and neck in the celebrity database. (SIAT MMLab, from Shenzhen, China, gets honorable mention.)

Conspicuously absent is Facebook’s DeepFace, which in all likelihood would be a serious contender. But as participation is voluntary and Facebook hasn’t released its system publicly, its performance on MegaFace remains a mystery.

Both leaders showed a steady decline as more distractors were added, although efficacy doesn’t fall off quite as fast as the logarithmic scale on the graphs makes it look. The ultra-high accuracy rate touted by Google in its FaceNet paper doesn’t survive past 10,000 distractors, and by the time there are a million, despite a hefty lead, it’s not accurate enough to serve much of a purpose.

Still, getting three out of four right with a million distractors is impressive — but that success rate wouldn’t hold water in court or as a security product. It seems we still have a ways to go before that surveillance state becomes a reality — that one in particular, anyway.

The researchers’ work will be presented a week from today at the Conference on Computer Vision and Pattern Recognition in Las Vegas.

Source: https://techcrunch.com/2016/06/23/facial-recognition-systems-stumble-when-confronted-with-million-face-database/

Ever wonder what you would look like with long, wavy hair? I think you’d look great. But how can you try on a few looks without spending a fortune at the salon, or hours in Photoshop? I’m glad you asked. All you need is a selfie and Dreambit, the face-swapping search engine.

The system analyzes the picture of your face and determines how to intelligently crop it to leave nothing but your face. It then searches for images matching your search term — curly hair, for example — and looks for “doppelganger sets,” images where the subject’s face is in a similar position to your own.

A similar process is done on the target images to mask out the faces and intelligently put your own in their place — and voila! You with curly hair, again and again and again. It’s a bit like that scene in Being John Malkovich, and just as creepy, depending on what face you’re putting in what situation. Keri Russell looks great in every style, though, as the diagram below shows.


The process by which faces are detected, masked, and replaced.
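A heavily simplified version of that detect-mask-replace pipeline can be put together with OpenCV. The sketch below uses Haar-cascade face detection and seamless cloning, with placeholder file names; it only illustrates the general idea, not Dreambit’s actual appearance-matching method.

```python
# Very rough sketch of a face swap: detect a face in each image, then blend the
# source face onto the target with seamless cloning. File names are placeholders;
# this is not Dreambit's method, just the general detect/mask/replace idea.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return faces[0]  # (x, y, w, h)

source = cv2.imread("selfie.jpg")          # your face
target = cv2.imread("curly_hair.jpg")      # the "doppelganger" image

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
mask = 255 * np.ones(face.shape, face.dtype)        # blend the whole crop
centre = (tx + tw // 2, ty + th // 2)
result = cv2.seamlessClone(face, target, mask, centre, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", result)
```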


It’s not limited to hairstyles, either: put yourself in a movie, a location, a painting — as long as there’s a similarly positioned face to swap yours with, the software can do it. A few facial features, like beards, make the edges of the face difficult to find, however, so you may not be able to swap with Rasputin or Gandalf.

Dreambit is the brainchild of Ira Kemelmacher-Shlizerman, a computer vision researcher at the University of Washington (she also does interesting work in facial recognition and augmented reality). And while it is fun and silly to play with, it could have more serious applications.

Kemelmacher-Shlizerman has also created systems that do automated age progression, something that can be useful in missing persons cases.

“With missing children, people often dye their hair or change the style so age-progressing just their face isn’t enough,” she said in a UW news release. “This is a first step in trying to imagine how a missing person’s appearance might change over time.”

In an email to TechCrunch, Kemelmacher-Shlizerman noted that the software is still very much in beta mode and as such can’t exactly be used by the FBI.

Source: https://techcrunch.com/2016/07/21/this-amazing-search-engine-automatically-face-swaps-you-into-your-image-results/

Thursday, 11 August 2016 09:13

Wikipedia Search Engine WikiSeek Launches

Palo Alto-based startup SearchMe has kept a low profile since being founded in March 2005. The company, which has 17 employees and raised $5 million from Sequoia Capital over two rounds, will launch a number of what founder Randy Adams calls “long tail search engines” in the near future. The first product they are launching is WikiSeek, which went live about an hour ago and will be officially announced on Wednesday.

WikiSeek is a search engine that has indexed only Wikipedia sites, plus sites that are linked to from Wikipedia. It serves two purposes. First, it is a much better Wikipedia search engine than the one on Wikipedia (and has been built with Wikipedia’s assistance and permission). Second, the fact that it also indexes sites that are linked to from Wikipedia means that, presumably, it will return only very high quality results and very little spam. It won’t show every relevant result to a query, but it will certainly give a good overview of a subject without all the mess.
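In other words, the index scope is a whitelist: Wikipedia’s own pages plus every host that some Wikipedia article links out to. A toy sketch of that filtering step, with a hand-written link list standing in for a real dump of Wikipedia’s external links:

```python
# Toy sketch of WikiSeek-style scoping: only keep pages that are on Wikipedia
# or on a host that some Wikipedia article links out to. The link list below
# stands in for a real dump of Wikipedia's external links.
from urllib.parse import urlparse

wikipedia_external_links = [
    "https://www.nasa.gov/mission_pages/apollo/",
    "https://www.gutenberg.org/ebooks/84",
]
allowed_hosts = {"en.wikipedia.org"} | {
    urlparse(url).hostname for url in wikipedia_external_links
}

def in_scope(url):
    """True if the URL belongs to the Wikipedia-anchored index."""
    return urlparse(url).hostname in allowed_hosts

print(in_scope("https://en.wikipedia.org/wiki/Apollo_11"))  # True
print(in_scope("https://www.nasa.gov/history/"))            # True
print(in_scope("http://obvious-spam-blog.example/"))        # False
```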

The search results also include a tag cloud of Wikipedia categories containing the search term, and results can be quickly filtered by clicking on one of those categories. The first three results of a query are always Wikipedia content (unless there are fewer than three results) and are shaded blue; the remaining results appear below the shaded area.

In addition to the search engine, WikiSeek offers two tools: a search plugin for Firefox, IE7 and Opera, and a really useful Greasemonkey-like Firefox extension that changes the way Wikipedia looks in that browser by adding a “WikiSearch” button to the search box. Click that button to see WikiSeek’s Wikipedia-only results; they are faster and better than the results Wikipedia returns through its native search feature.

SearchMe is donating “the majority” of revenue generated from advertising on WikiSeek to the Wikimedia Foundation. Adams told me earlier this evening that WikiSearch is a showcase product for their technology, and they are happy to help the Wikipedia community as much as possible by donating those revenues.

Confusion with Wikiasari

WikiSeek will undoubtedly be confused with the much discussed Wikiasari search engine that was announced by Wikipedia founder Jimmy Wales last month. In fact, in our original post on Wikiasari, we included a screenshot that we later learned was not a prototype of Wikiasari. We corrected that post, and asked “the Wikisearch Screenshot Isn’t Wikiasari, So What Is It?” It was actually an early WikiSeek prototype, then called WikiSearch. Question answered.


Source: https://techcrunch.com/2007/01/16/wikipedia-search-engine-wikiseek-launches/

Race, education, socioeconomic factors all linked to lower online participation

Recruiting minorities and poor people to participate in medical research always has been challenging, and that may not change as researchers turn to the internet to find study participants and engage with them online, new research suggests. A study led by researchers at Washington University School of Medicine in St. Louis concludes that unless explicit efforts are made to increase engagement among under-represented groups, current health-care disparities may persist.

In a study of 967 people taking part in genetic research, the investigators found that getting those individuals to go online to get follow-up information was difficult, particularly if study subjects didn’t have high school educations, had incomes below the poverty line or were African-American.

The new findings are available online July 28 in the journal Genetics in Medicine.

“We don’t know what the barriers are,” said first author Sarah M. Hartz, MD, PhD. “We don’t know whether some people don’t have easy access to the internet or whether there are other factors, but this is not good news as more and more research studies move online because many of the same groups that have been under-represented in past medical research would still be missed going forward.”

Hartz and her colleagues offered participants detailed information about their ancestry as part of genetic research to understand DNA variations linked to smoking behavior and nicotine addiction. Some 64 percent of the people in the study answered a survey question stating that they were either “very interested” or “extremely interested” in that information, but despite repeated attempts to get the subjects to view those results online, only 16 percent actually did.

The numbers fell to 10 percent or lower among people with low incomes and no high school diplomas, as well as among study subjects who were African-American. Such groups traditionally have been under-represented in medical research studies.

“This is particularly relevant now because of President Obama’s Precision Medicine Initiative,” said Hartz, an assistant professor of psychiatry.

The project seeks to recruit 1 million people and analyze their DNA to understand risk factors related to a variety of diseases. Ultimately, the project seeks to develop personalized therapies tailored to individual patients.

“Our results suggest that getting people to participate in such online registries is going to be a challenge, particularly if they live below the poverty line, don’t have high school diplomas or are African-American,” Hartz said.

Because 84 percent of American adults now use the internet and 68 percent own smartphones, some researchers have believed that traditional barriers to study recruitment — such as income, education and race — would be less important in the internet age.

In the Precision Medicine Initiative, researchers plan to collect information about diet, exercise, drinking and other behaviors, as well as about environmental risk factors, such as pollution. The study will allow participants to sign up by computer or smartphone, and recruitment aims to match the racial, ethnic and socioeconomic diversity of the United States. The idea is to make it as easy as possible to enroll, but Hartz’s findings suggest signing up on the internet won’t eliminate every barrier.


As part of the Washington University study, the smokers who participated were given the opportunity to have their DNA analyzed by 23andMe, a personalized genetics company. The participants were able to receive reports detailing where their ancestors came from, based on 32 reference populations from around the world. That information was available through a secure, password-protected online account set up and registered by the individual through the 23andMe website.

Each subject received an e-mail from the researchers with instructions on how to log on to the 23andMe website and retrieve the information. After a few weeks, the researchers sent another e-mail to those who did not log on. Then, the researchers made phone calls, and, if the subjects still didn’t log onto the site, they were sent a letter in the mail.

Even after all of those attempts, only 45 percent of the European-American participants who had high school educations and lived above the poverty line ever looked at the information. Among African-American participants who graduated from high school and lived above the poverty line, only 18 percent logged onto the site.

“Our assumption that the internet and smartphone access have equalized participation in medical research studies doesn’t appear to be true,” Hartz said. “Now is the time to figure out what to do about it and how to fix it, before we get too far along in the Precision Medicine Initiative, only to learn that we’re leaving some under-represented groups of people behind.”

Source: https://source.wustl.edu/2016/07/use-internet-medical-research-may-hinder-recruitment-minorities-poor/

What are business attributes, and why should local businesses care? Columnist Adam Dorfman explores.

When checking into places on Google Maps, you may have noticed that Google prompts you to volunteer information about the place you’re visiting. For instance, if you check into a restaurant, you might be asked whether the establishment has a wheelchair-accessible entrance or whether the location offers takeout. There’s a reason Google wants to know: attributes.

Attributes consist of descriptive content such as the services a business provides, payment methods accepted or the availability of free parking — details that may not apply to all businesses. Attributes are important because they can influence someone’s decision to visit you.

Google wants to set itself up as a go-to destination of rich, descriptive content about locations, which is why it crowdsources business attributes. But it’s not the only publisher doing so. For instance, if you publish a review on TripAdvisor or Yelp, you’ll be asked a similar battery of questions but with more details, such as whether the restaurant is appropriate for kids, allows dogs, has televisions or accepts bitcoins.

Many of these publishers are incentivizing this via programs like Google’s Local Guides, TripAdvisor’s Badge Collections, and Yelp’s Elite Squad because having complete, accurate information about locations makes each publisher more useful. And being more useful means attracting more visitors, which makes each publisher more valuable.


It’s important that businesses manage their attributes as precious location data assets, if for no other reason than that publishers are doing so. I call publishers (and aggregators who share information with them) data amplifiers because they amplify a business’s data across all the places where people conduct local searches. If you want people to find your business and turn their searches into actual in-store visits, you need to share your data, including detailed attributes, with the major data amplifiers.

Many businesses believe their principal location data challenge is ensuring that their foundational data, such as their names, addresses and phone numbers, are accurate. I call the foundational data “identities,” and indeed, you need accurate foundational data to even be considered when people search for businesses. But as important as they are — and challenging to manage — identities solve for only one-half of the search challenge. Identities ensure visibility, but you need attributes to turn searches into business for your brand.

Attributes are not new, but they’ve become more important because of the way mobile is rapidly accelerating the purchase decision. According to seminal research published by Google, mobile has given rise to “micro-moments,” or times when consumers use mobile devices to make quick decisions about what to do, where to go or what to buy.

Google noted that the number of “near me” searches (searches conducted for goods and services nearby) has increased 146 percent year over year, and 88 percent of these “near me” searches are conducted on mobile devices. As Google’s Matt Lawson wrote:

With a world of information at their fingertips, consumers have heightened expectations for immediacy and relevance. They want what they want when they want it. They’re confident they can make well-informed choices whenever needs arise. It’s essential that brands be there in these moments that matter — when people are actively looking to learn, discover, and/or buy.

Attributes encourage “next moments,” or the action that occurs after someone has found you during a micro-moment. Google understands that businesses failing to manage their attributes correctly will drop off the consideration set when consumers experience micro-moments. For this reason, Google prompts users to complete attributes about businesses when they check into a location on Google Maps.

At the 2016 Worldwide Developers Conference, Apple underscored the importance of attributes when the company rolled out a smarter, more connected Siri that makes it possible for users to create “next moments” faster by issuing voice commands such as “Siri, find some new Italian restaurants in Chicago, book me dinner, and get me an Uber to the restaurant.” In effect, Siri is a more efficient tool for enabling next moments, but only for businesses that manage the attributes effectively.

And with its recently released Google My Business API update to version 3.0, Google also gave businesses that manage offline locations a powerful competitive weapon: the ability to manage attributes directly. By making it possible to share attributes via your Google My Business page, Google has not only amplified its own role as a crucial publisher of attributes but has also given businesses an important tool to take control of their own destiny. It’s your move now.
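In practice, managing attributes directly means patching the attributes on a location resource through the Google My Business API. The sketch below only approximates that shape: the endpoint path, location name and attribute IDs are placeholders and assumptions rather than values copied from Google’s documentation, and a real request needs a valid OAuth access token for the business account.

```python
# Approximate sketch of updating location attributes through the Google My
# Business API. The endpoint path, location name and attribute IDs below are
# placeholders/assumptions, not copied from Google's docs; a real request also
# needs a valid OAuth 2.0 access token for the account.
import json
import urllib.request

ACCESS_TOKEN = "ya29.placeholder-oauth-token"
LOCATION = "accounts/1234567890/locations/9876543210"   # placeholder IDs

payload = {
    "attributes": [
        {"attributeId": "has_wheelchair_accessible_entrance",  # assumed ID
         "values": [True]},
        {"attributeId": "has_takeout", "values": [True]},      # assumed ID
    ]
}

req = urllib.request.Request(
    url=f"https://mybusiness.googleapis.com/v3/{LOCATION}?updateMask=attributes",
    data=json.dumps(payload).encode("utf-8"),
    method="PATCH",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
             "Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment only with real credentials and IDs
```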

Source: http://searchengineland.com/google-mining-local-business-attributes-252283

It’s surprising the internet works at all, given the age of its core software. The question is, can we catch it before it falls over? A panel of academic experts recently took part in a discussion on the future of the internet and, among other things, highlighted its fragility, the ease with which it can be disrupted and its seeming resistance to change.

The weaknesses arise primarily from the fact that the internet’s core Layer 3 protocols in the TCP/IP stack were invented many years ago. “There are a lot of challenges for the internet. We face daily problems,” said Timothy Roscoe, a professor at ETH Zurich, the city’s science, technology and mathematics university.


“Most of what we do is at Layer 3, which is what makes the internet the internet.” However, new and incredibly popular services, such as YouTube, Netflix, Twitter and Facebook, have put pressure on these protocols.


New age, old protocols

Laurent Vanbever, an assistant professor at ETH, said: “There is a growing expectation by users that they can watch a 4K video on Netflix while someone else in the house is having a Skype call. They expect it to work but the protocols of the internet were designed in the 1970s and 1980s and we are now stretching the boundaries.”

The internet is often described as a network of networks. What makes these networks communicate with one another is BGP, the border gateway protocol. In essence, it’s the routing protocol used by internet service providers (ISPs). It makes the internet work.

Roscoe said: “BGP is controlled by 60,000 people, who need to cooperate but also compete.” These people, network engineers at major ISPs, email each other to keep the internet running.


Routing for trouble

“When you visit a website, you really don’t know where your internet traffic goes,” said Roscoe. One would assume the route network traffic takes from a user’s computer to the server is the shortest possible.


But often, according to Roscoe, this is not the case. “I have seen network packets taking remarkably bizarre paths across the internet,” he said, and added that Pakistan was able to route all YouTube traffic through its servers, blocking the traffic and effectively taking YouTube offline. Due to the way BGP and other protocols work, he said, there is “very little control over where traffic goes”. The question is why there is so little control.
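The mechanics behind that YouTube incident are straightforward: routers forward traffic along the most specific matching prefix, so announcing a narrower block than the legitimate owner’s pulls the traffic toward the rogue announcement. The sketch below, using illustrative prefixes rather than the real routing table, shows longest-prefix-match selection doing exactly that:

```python
# Illustrative sketch of why a more-specific BGP announcement hijacks traffic:
# forwarding uses longest-prefix match, so a /24 beats the owner's /22.
# Prefixes and labels here are examples, not the real routing table.
import ipaddress

routes = {
    ipaddress.ip_network("208.65.152.0/22"): "legitimate origin (owner's /22)",
    ipaddress.ip_network("208.65.153.0/24"): "rogue, more-specific /24 announcement",
}

def best_route(destination):
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routes if dest in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]  # longest prefix wins

print(best_route("208.65.153.10"))  # covered by the /24: traffic goes to the hijacker
print(best_route("208.65.152.10"))  # only the /22 covers this: stays with the owner
```

Because every BGP speaker that accepts the rogue /24 applies the same rule, the misdirection spreads as far as the announcement propagates, which is why a single operator’s mistake or malice can take a service offline globally.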

Mark Handley, a professor of network systems at University College, London, said: “The internet is built out of a set of networks, where the operators have their own desires about what they want their network to do. Internet operators partially hide pricing and routing policy information, while needing to communicate with their neighbours.”

So, there’s a paradox, driven by competition to route traffic, and they [the operators] “are hiding who they will talk to, while trying to talk to each other”, said Handley. More recently, Edward Snowden’s revelations propelled into the public domain the ease with which the internet’s traffic can be routed and moved, highlighting the mass collection of internet data by US government spooks.


No need for internal change

Adrian Perrig, a network security professor at ETH Zurich, said his group at the university has been working on a new protocol and trying to tackle the internet’s secure routing challenge, in a way that is also more efficient than existing methods.

He said: “The architecture was started as an academic exercise, but we realised it is not that hard to deploy, as we do not need to change the internals of networks. We only need to change the points where different ISPs touch each other.”

So far, three major ISPs have begun deploying the new protocol, along with a few banks that want to gain greater transparency over their network packets. Perrig and his team are attempting to develop a protocol that can easily be deployed.


Too complex to change

Matt Brown, site reliability engineering head at Google, said: “A lot of the core protocols of the internet we rely on are very old. There are many improvements that need to be made to give us the level of robustness and security needed for the role the internet has in society.” But, he argued, it is still extremely hard to upgrade these protocols. “With a network you get network effects. You are effectively constrained by the lowest common denominator, like the last person who hasn’t upgraded who holds everybody back.”

For instance, he said, the digital subscriber line (DSL) router provided by ISPs to people at home to allow an internet connection may be four years old, yet it contains critical protocols.

“Getting new functionality to everyone in the world is a huge challenge,” he added. For instance, while the supply of available IPv4 addresses has effectively run out, Google recently found that only 10% of the world’s traffic has moved to the next version, IPv6. There is a cost for ISPs if they want to make these changes. Moreover, as the slow rollout of IPv6 is revealing, many prefer to stick with old technology, simply because it can be made to work.

Source: http://www.computerweekly.com/news/450296912/Network-Collapse-Why-the-internet-is-flirting-with-disaster
