Web Directories

Olivia Russell

Remarkably gifted, knowledgeable and resourceful Market Research Analyst with over eight years' experience in collecting and analyzing data to evaluate existing and potential product and service markets, identifying and monitoring competitors, and researching market conditions and industry changes that may affect sales. She holds a master's degree in marketing from the Australian Institute of Business.

Life tool: we use search to guide many aspects of our everyday existence

An estimated 1.2 trillion searches are tapped in to Google every year. But while it may feel like Google’s primary purpose is to help us find our nearest dry cleaner, it is a hugely profitable business that dominates the search industry. Last year, its advertising revenue was $79.4bn.

Whether it's Google, Yahoo or Bing, we use search engines to guide every aspect of our lives, whether we're looking for somewhere to eat or sleep, or booking flights.

And with fibre broadband being rolled out across the country by BT, searching online has never been easier, or quicker. But how can we use search engines to maximise results?

Avoid using words like ‘the’, ‘want’ and ‘at’

They're known as stop words, and search engines ignore them, so including them only clutters your query. Cut them out and get to the results you want faster. Ashley Williams, head of SEO at Meta Search Experts, also suggests adding a minus sign (-) before words you don't want to appear in a search.

For example, if you wanted to search for apple pudding recipes but you didn’t want to bake apple crumble, you might search apple -crumble pudding, to avoid results showing you the dessert you don’t want.

Use quotation marks

If you’re searching for a specific phrase, add quotation marks around it so you don’t lose a word or two. For example, say you want to search for crumble and custard, but don’t want results for just crumble, or just custard, then add quotation marks around “crumble and custard”.

How to find the best recipes

There's a more advanced way to find a phrase on a website than just pressing Ctrl F and hoping a word will pop up. This trick is especially useful for recipes. Say you want to find a smashing treacle tart recipe and you think you saw a brilliant one on BBC Good Food several months ago; Google can help you track it down.

Intelligent searching: SEO is a key factor to consider (Credit: Getty)

Simply add the word "site" and a colon before the website address, then add your search terms after. Typing site:bbcgoodfood.co.uk treacle tart will only bring up results for treacle tarts on bbcgoodfood.co.uk, says Tom Jeffries, search manager at Bizdaq.
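Programmatically, a site-restricted query like this can be assembled into a search URL with nothing more than the standard library. The `build_search_url` helper below is a hypothetical illustration of the query syntax, not an official API:

```python
from urllib.parse import urlencode

def build_search_url(terms, site=None):
    """Assemble a Google search URL, optionally restricted to one
    domain with the site: operator."""
    query = " ".join(terms)
    if site:
        query = f"site:{site} {query}"
    return "https://www.google.com/search?" + urlencode({"q": query})

url = build_search_url(["treacle", "tart"], site="bbcgoodfood.co.uk")
# url now targets treacle tart results on bbcgoodfood.co.uk only
```

The same helper works for unrestricted searches by leaving `site` unset.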

The power of ‘AND’

Omitting words such as "at" and "the" is all well and good, but the word "AND", as long as it's written in capital letters, is a really useful search tool. For example, searching for "company name" AND "employee name" would return only pages mentioning both, helping you find a particular employee at a particular company.

Another way to search effectively for a name is by putting a forward slash between the first name and the surname. Rather than searching for Andrew Smith Doctor, which will bring up all of the Andrews and anyone with a Smith surname, adding a slash means only Andrew Smiths appear in the search engine.

Keep it relevant

Another nifty function is using the word AROUND to ensure searches bring up relevant content. Samuel Hill, marketing executive at Gorvins Solicitors in Manchester, explains: “If you wanted to research José Mourinho’s interactions with Arsène Wenger, you could simply include both terms in a search, but you’d find thousands of articles in which these two terms may appear many paragraphs apart, with little or no relation to one another.

‘‘But if you instead search ‘mourinho’ AROUND(10) ‘wenger’ then the first results will be those in which Mourinho appears within 10 words of Wenger. Bear in mind that for this to work, both search terms must be in quotes, AROUND must be capitalised, and the number needs to be in parentheses.”
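To see why proximity matters, the AROUND(n) idea can be imitated locally on a piece of text. The sketch below is a rough, simplified approximation of the operator; the function name and matching rules are our own, not Google's:

```python
def around(text, term_a, term_b, n):
    """Rough local imitation of the AROUND(n) operator: True when
    term_a and term_b occur within n words of each other."""
    words = text.lower().split()
    pos_a = [i for i, w in enumerate(words) if term_a.lower() in w]
    pos_b = [i for i, w in enumerate(words) if term_b.lower() in w]
    return any(abs(i - j) <= n for i in pos_a for j in pos_b)

report = "Mourinho refused to shake hands with Wenger after the final whistle"
around(report, "mourinho", "wenger", 10)  # True
```

A document where the two names sit twenty words apart would fail the same check, which is exactly the noise the operator filters out.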

Quick translations

When you have more than five million free BT Wi-Fi hotspots around the UK, it's easy to use your smartphone for everything. Broadband Genie's head of strategy Rob Hilborn suggests that, rather than flicking through a dictionary, you simply type "translate 'I really like cake' into French" into your search engine and the correct phrase will pop up.

Best of all, the quality of machine translation has improved by leaps and bounds, so your translation will be far more accurate than it would have been even five years ago, saving your bacon when you're travelling abroad.

Source : telegraph.co.uk

The promise of free money should raise red flags, but what if it comes from a supposed friend?

It starts with a Facebook message.

“You’re contacted by your friend who says, ‘Hey, did you get your $250,000?’ You’re going, ‘What do you mean $250,000?'” said Greg Dunn, CEO, Better Business Bureau Hawaii.

Dunn says the scammer hijacks an account and uses its friend list to send messages, hoping people will respond.

“The next step is they send you a link to a Facebook page and that Facebook page is for a fake organization that is the World Tax and Health Organization of the world federal government, which is all completely farcical,” Dunn explained.

That’s where you’re told to enter your personal information, such as your occupation, date of birth, annual income, and mother’s middle name. Then you’re told you need to pay a $550 fee for the processing of the $250,000 check.

“You may then receive a fake document, a money order or a check for $275,000. You’re thinking, ‘Wow, this is great. I got extra money.’ So you take the money order or cashier’s check down to the bank and deposit it,” Dunn said.

Then you’re told too much money was sent by mistake and you need to send back $25,000. It’s only when the check bounces you realize it was a scam.

Dunn says the scam has been targeting members of non-profit organizations and community groups, and at least one person here in Hawaii reported falling for the scam.

The BBB says to be careful when responding to email or social media messages asking for money or personal information, even if the message looks like it’s from a friend.

If you’re not sure, call the person on the phone to verify the authenticity of the message, and if it sounds too good to be true, it probably is.

Source : khon2.com

The workplace, like almost all places where people interact, can be a petri dish of conflict. Offensive remarks, unrealistic demands, people taking credit for others’ work, bullying — the transgressions that occur can take many forms. They also have the potential to escalate out of control and permanently damage relationships.

Gabrielle S. Adams, an assistant professor at the London Business School and a visiting fellow at Harvard University, has examined the role that empathy and forgiveness can play in resolving these conflicts.

In recent studies, Professor Adams found that misunderstandings often exist between the victims of harm and the people who committed the harm. In many cases, the transgressors did not intend a negative effect, whereas the victims tended to think that the damage was intentional. In addition, transgressors frequently felt guilty and wanted to be forgiven much more than their victims realized.

When someone feels wronged, it can help to actively empathize with the person who is perceived as the wrongdoer, according to a study that Professor Adams conducted along with M. Ena Inesi, also of the London Business School. That can enable the victim to realize that the transgressor may well wish to be forgiven, their study found.

They came to these conclusions, in part, by having people record diaries, over a five-day period, of situations in which they thought they had harmed or offended other people, or been harmed by or offended by others. From these diaries, wide “miscalibrations” of other people’s perceptions became apparent, Professor Adams said. “We ask victims to think about what it would be like to be the transgressor, and you reduce that miscalibration,” she said.

She also conducted a lab experiment in which people could select from a set of assignments. One of the assignments — testing out various juice flavors — was much more enjoyable than a different one: going through a set of nonsense words and crossing out the letter E.

If given first choice of an assignment, people would almost always choose the fun juice test, which meant that the other participant was forced to take the tedious letter E assignment. As a result, the second person tended to resent the first, but those who chose first indicated that they hadn’t intended the harm and felt guilty about it.

This is typical of many workplace conflicts, Professor Adams said. Think of bullying. Many people can cite instances in which they think they have been bullied. But how many people would say that they have bullied someone themselves?

In a conflict, the people involved almost always have a different interpretation of events, Professor Adams said. This is partly because we have a built-in tendency as humans to think that we are good people, and also that we are right.

By making it a point to resolve conflicts by encouraging empathy and forgiveness, workers and managers can improve workplace conditions, Professor Adams said.

But there is a dark side to forgiveness, she added. This is when the perceived transgressors do not think they have done anything wrong — in which case the person offering forgiveness is seen as self-righteous, in that way making the relationship even worse.

“Before you can even offer forgiveness, there needs to be some kind of mutual understanding of the transgression,” Professor Adams said. If that can be achieved, then forgiveness can help both parties move forward, she said.

Source : nytimes.com

Concern over an apocalyptic asteroid strike has risen all the way to the top: The White House released a document this week detailing a strategy for National Near Earth Object (NEO) preparedness. Morgan Freeman would no doubt be proud, although honestly, the nation might have more pressing apocalypse concerns closer to home.

Last year brought renewed interest in handling humanity-ending impact events. After a 2014 audit showed that NASA had a cruddy NEO preparedness system, the agency founded a new Planetary Defense Coordination Office (PDCO) last year to detect all of our potentially nasty NEO neighbors. The office quickly escalated talk to action, running preparedness drills with the Federal Emergency Management Agency (FEMA), launching spacecraft to gather asteroid information, and even drawing up plans to nuke the bad boys out of the sky if things get dicey.

It isn’t surprising that the White House has gotten involved. The Interagency Working Group for Detecting and Mitigating the Impact of Earth-bound Near-Earth Objects (DAMIEN) prepared the document on NEO preparedness with the goal of “enhancing the integration of existing national and international assets and adding important capabilities that are currently lacking.” DAMIEN also sounds like quite an appropriate acronym for the government apocalypse squad.

(Image: White House)

The strategy document lists seven goals:

  1. Improve the country’s NEO tracking and classifying abilities
  2. Figure out how to move or blow up threatening NEOs
  3. Make our models and predictions better
  4. Come up with emergency procedures in case an NEO can’t be deflected
  5. Create warning systems and recovery strategies
  6. Include other countries in our planning
  7. Put together protocols and thresholds to use for quick decision making

It’s unclear if anything in the White House document will translate into action, and the whole thing is loaded with detail-devoid “the United States should do X” statements. The PDCO already has a few trays in the oven anyway, and action will probably depend on funding from President Trump, though I feel like impact preparedness is something just crazy enough to pique his interest.

Honestly at this point, odds are if a giant asteroid was careening towards Earth, we’d all die while some old guys in suits bickered in a back room. But hey, that’s not a bad thing, because there wouldn’t be war or famine anymore. Anyway, I guess it’s nice that the President’s office is worried about us or something.

Source : gizmodo.com

From the launch pad of India's Satish Dhawan Space Centre, 88 tiny satellites hitched a Valentine's Day ride into space.

The satellites joined an existing network of orbiting cameras, operated by a company called Planet, that are working in tandem towards a singular goal: to have freshly updated images of every place on Earth each day.

Planet and its competitors say the images captured by their satellite constellations are already creating new opportunities for businesses, governments, and academics who have never had access to data this frequent or fresh. But the looming challenge is how to make sense of it all, which is where players like Descartes Labs come in.

On Tuesday, Los Alamos, N.M.-based Descartes Labs unveiled its take on finding signals in the noise, with software called GeoVisual Search. Point it at a baseball diamond, and it can find other baseball diamonds in satellite imagery from across the U.S. The same goes for wind turbines, golf courses or the patchwork of circular crops you can see from high up in the sky. Give it a recognizable landmark, object or geographic feature, and GeoVisual Search will try to find others like it around the world.


A bird's-eye view of the distribution of baseball diamonds across the United States. (Matthew Braga/CBC)

The tool, inspired by an open source project called Terrapattern, may just be a public-facing demo, but it's a glimpse at where the satellite industry is heading next.

"We have to get prepared for this onslaught of data otherwise we're going to have a bunch of pixels that won't be analyzed for years," said Mark Johnson, CEO and co-founder of Descartes Labs, in an interview. That means giving miners, logistics companies, non-governmental organizations, environmental agencies and others the ability to comb through lots and lots of imagery quickly, and do it themselves.

From suburbs to solar farms

Traditionally, access to high-resolution and up-to-date satellite imagery has been limited to those who can afford to pay. And not many people have the skills needed to work with the data that comes back. But by making satellite imagery more accessible, the hope is that businesses, governments, academics — even your average internet user — can find insights that once required deep pockets and a PhD.

To make it happen, Descartes Labs has trained a neural network on three sources of satellite imagery of varying resolutions. The network extracts features from small areas of imagery, such as edges, shapes and colours. When a user clicks on a landmark, such as a tennis court or airport runway, the GeoVisual Search software examines the features in the selected area on the map and looks for other areas with similar features. 
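In outline, this kind of visual search reduces to a nearest-neighbour lookup over feature vectors. The sketch below is a toy illustration of that idea, with tiny hand-made vectors standing in for real CNN features; the function names are ours, not Descartes Labs' actual code:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def most_similar(query_vec, tiles, top_k=3):
    """Rank map tiles by feature similarity to the clicked tile.
    tiles maps tile id -> feature vector (hand-made here, not CNN output)."""
    return sorted(tiles, key=lambda t: cosine(query_vec, tiles[t]), reverse=True)[:top_k]

tiles = {
    "baseball_diamond_1": [0.9, 0.1, 0.0],
    "baseball_diamond_2": [0.8, 0.2, 0.1],
    "forest":             [0.0, 0.1, 0.9],
}
matches = most_similar([0.85, 0.15, 0.05], tiles, top_k=2)
```

At planetary scale the same comparison would run against billions of tile vectors, which is where approximate nearest-neighbour indexes replace this brute-force sort.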

"We've never taught the computer what a suburb looks like specifically," says Johnson, as he clicks on a random cluster of cul-de-sacs in Iowa, which the software says look similar to the suburbs found in Edmonton. "It's really doing a search over the entire globe trying to figure out what is visually similar."


Descartes Labs hopes that by making satellite imagery easier to interact with, myriad industries will be interested in using its services to track assets throughout the world. (Matthew Braga/CBC)

GeoVisual Search is intended as a glimpse at what Descartes Labs can do for clients behind the scenes. The idea is to offer customers a service that can reliably identify whatever assets they want — be it ships, shipping containers, or solar farms — and track changes to those assets over time, worldwide, as new daily imagery comes in. 

"A logical next step after this is counting things precisely," says Johnson, name dropping things like oil rigs, wind turbines, manufacturing plants and dams. The company aims to make its service available later this year.

Finding signals in the noise

Descartes Labs isn't the only company operating in this space.

Orbital Insight has used machine learning to count cars in mall parking lots as a proxy for retail traffic, and measured the height of buildings under construction in China as a gauge for economic health. And last month, Planet acquired the company Terra Bella from Google. It builds and launches satellites, and also does image analysis.

Similarly, Descartes Labs has used its imagery to generate forecasts of corn production that it says are more accurate than the U.S. Department of Agriculture's own. But the next step is building a system that can do analysis for any client or industry, and do it on a country-wide or global scale, which is one of the things Johnson believes sets Descartes Labs apart.

"The dark side of a lot of data is finding salient things in large data sets becomes harder and harder," says Johnson. Both the challenge and opportunity for Descartes Labs and its competitors will be accurately separating the stuff that's important from the stuff that's not.

Author : Matthew Braga

Source : http://www.cbc.ca/news/technology/satellite-imagery-descartes-labs-geovisual-search-planet-1.4013039

Soft robots that can grasp delicate objects, computer algorithms designed to spot an “insider threat,” and artificial intelligence that will sift through large data sets — these are just a few of the technologies being pursued by companies with investment from In-Q-Tel, the CIA’s venture capital firm, according to a document obtained by The Intercept.

Yet among the 38 previously undisclosed companies receiving In-Q-Tel funding, the research focus that stands out is social media mining and surveillance; the portfolio document lists several tech companies pursuing work in this area, including Dataminr, Geofeedia, PATHAR, and TransVoyant.

In-Q-Tel’s investment process.

Screen grab from In-Q-Tel’s website.

Those four firms, which provide unique tools to mine data from platforms such as Twitter, presented at a February “CEO Summit” in San Jose sponsored by the fund, along with other In-Q-Tel portfolio companies.

The investments appear to reflect the CIA’s increasing focus on monitoring social media. Last September, David Cohen, the CIA’s second-highest ranking official, spoke at length at Cornell University about a litany of challenges stemming from the new media landscape. The Islamic State’s “sophisticated use of Twitter and other social media platforms is a perfect example of the malign use of these technologies,” he said.

Social media also offers a wealth of potential intelligence; Cohen noted that Twitter messages from the Islamic State, sometimes called ISIL, have provided useful information. “ISIL’s tweets and other social media messages publicizing their activities often produce information that, especially in the aggregate, provides real intelligence value,” he said.

The latest round of In-Q-Tel investments comes as the CIA has revamped its outreach to Silicon Valley, establishing a new wing, the Directorate of Digital Innovation, which is tasked with developing and deploying cutting-edge solutions by directly engaging the private sector. The directorate is working closely with In-Q-Tel to integrate the latest technology into agency-wide intelligence capabilities.

Dataminr directly licenses a stream of data from Twitter to spot trends and detect emerging threats.

Screen grab from Dataminr’s website.

Dataminr directly licenses a stream of data from Twitter to visualize and quickly spot trends on behalf of law enforcement agencies and hedge funds, among other clients.

Geofeedia collects geotagged social media messages to monitor breaking news events in real time.

Screen grab from Geofeedia’s website.

Geofeedia specializes in collecting geotagged social media messages, from platforms such as Twitter and Instagram, to monitor breaking news events in real time. The company, which counts dozens of local law enforcement agencies as clients, markets its ability to track activist protests on behalf of both corporate interests and police departments.

PATHAR mines social media to determine networks of association.

Screen grab from PATHAR’s website.

PATHAR’s product, Dunami, is used by the Federal Bureau of Investigation to “mine Twitter, Facebook, Instagram and other social media to determine networks of association, centers of influence and potential signs of radicalization,” according to an investigation by Reveal.

TransVoyant analyzes data points to deliver insights and predictions about global events.

Screen grab from TransVoyant’s website.

TransVoyant, founded by former Lockheed Martin Vice President Dennis Groseclose, provides a similar service by analyzing multiple data points for so-called decision-makers. The firm touts its ability to monitor Twitter to spot “gang incidents” and threats to journalists. A team from TransVoyant has worked with the U.S. military in Afghanistan to integrate data from satellites, radar, reconnaissance aircraft, and drones.

Dataminr, Geofeedia, and PATHAR did not respond to repeated requests for comment. Heather Crotty, the director of marketing at TransVoyant, acknowledged an investment from In-Q-Tel, but could not discuss the scope of the relationship. In-Q-Tel “does not disclose the financial terms of its investments,” Crotty said.

Carrie A. Sessine, the vice president for external affairs at In-Q-Tel, also declined an interview because the fund “does not participate in media interviews or opportunities.”

Over the last decade, In-Q-Tel has made a number of public investments in companies that specialize in scanning large sets of online data. In 2009, the fund partnered with Visible Technologies, which specializes in reputation management over the internet by identifying the influence of “positive” and “negative” authors on a range of platforms for a given subject. And six years ago, In-Q-Tel formed partnerships with NetBase, another social media analysis firm that touts its ability to scan “billions of sources in public and private online information,” and Recorded Future, a firm that monitors the web to predict events in the future.

Unpublicized In-Q-Tel Portfolio Companies

Company | Description | Contract
Aquifi | 3D vision software solutions |
Beartooth | Decentralized mobile network |
CliQr | Hybrid cloud management platform | Contract
CloudPassage | On-demand, automated infrastructure security |
Databricks | Cloud-hosted big data analytics and processing platform |
Dataminr | Situational awareness and analysis at the speed of social media | Contract
Docker | Open platform to build, ship, and run distributed applications | Contract
Echodyne | Next-generation electronically scanning radar systems | Contract
Epiq Solutions | Software-defined radio platforms and applications | Contract
Geofeedia | Location-based social media monitoring platform | Contract
goTenna | Alternate network for off-grid smartphone communications | Contract
Headspin | Network-focused approach to improving mobile application performance | Contract
Interset | Insider threat detection using analytics, machine learning, and big data |
Keyssa | Fast, simple, and secure contactless data transfer |
Kymeta | Antenna technology for broadband satellite communications |
Lookout | Cloud-based mobile cybersecurity |
Mapbox | Design and publish visual, data-rich maps | Contract
Mesosphere | Next-generation scale, efficiency, and automation in a physical or cloud-based data center | Contract
Nervana | Next-generation machine learning platform |
Orbital Insight | Satellite imagery processing and data science at scale |
Orion Labs | Wearable device and real-time voice communications platform |
Parallel Wireless | LTE radio access nodes and software stack for small cell deployment |
PATHAR | Channel-specific social media analytics platform | Contract
Pneubotics | Mobile material handling solutions to automate tasks |
PsiKick | Redefined ultra-low power wireless sensor solutions | Contract
PubNub | Build and scale real-time apps |
Rocket Lab | Launch provider for small satellites | Contract
Skincential Sciences | Novel materials for biological sample collection |
Soft Robotics | Soft robotics actuators and systems |
Sonatype | Software supply chain automation and security | Contract
Spaceflight Industries | Small satellite launch, network, and imagery provider | Contract
Threatstream | Leading enterprise-class threat intelligence platform |
Timbr.io | Accessible code-driven analysis platform |
Transient Electronics | Dissolvable semiconductor technology | Contract
TransVoyant | Live predictive intelligence platform |
TRX Systems | 3D indoor location and mapping solutions |
Voltaiq | SaaS platform for advanced battery analysis |
Zoomdata | Big data exploration, visualization, and analytics platform | Contract

Bruce Lund, a senior member of In-Q-Tel’s technical staff, noted in a 2012 paper that “monitoring social media” is increasingly essential for government agencies seeking to keep track of “erupting political movements, crises, epidemics, and disasters, not to mention general global trends.”

The recent wave of investments in social media-related companies suggests the CIA has accelerated the drive to make collection of user-generated online data a priority. Alongside its investments in start-ups, In-Q-Tel has also developed a special technology laboratory in Silicon Valley, called Lab41, to provide tools for the intelligence community to connect the dots in large sets of data.

In February, Lab41 published an article exploring the ways in which a Twitter user’s location could be predicted with a degree of certainty through the location of the user’s friends. On GitHub, an open source code-hosting site for developers, Lab41 currently has a project to ascertain the “feasibility of using architectures such as Convolutional and Recurrent Neural Networks to classify the positive, negative, or neutral sentiment of Twitter messages towards a specific topic.”

Collecting intelligence on foreign adversaries has potential benefits for counterterrorism, but such CIA-supported surveillance technology is also used for domestic law enforcement and by the private sector to spy on activist groups.

Palantir, one of In-Q-Tel’s earliest investments in the social media analytics realm, was exposed in 2011 by the hacker group LulzSec to be in negotiation for a proposal to track labor union activists and other critics of the U.S. Chamber of Commerce, the largest business lobbying group in Washington. The company, now celebrated as a “tech unicorn” — a term for start-ups that reach over $1 billion in valuation — distanced itself from the plan after it was exposed in a cache of leaked emails from the now-defunct firm HBGary Federal.

Cover of the document obtained by The Intercept.

Yet other In-Q-Tel-backed companies are now openly embracing the practice. Geofeedia, for instance, promotes its research into Greenpeace activists, student demonstrations, minimum wage advocates, and other political movements. Police departments in Oakland, Chicago, Detroit, and other major municipalities have contracted with Geofeedia, as well as private firms such as the Mall of America and McDonald’s.

Lee Guthman, an executive at Geofeedia, told reporter John Knefel that his company could predict the potential for violence at Black Lives Matter protests just by using the location and sentiment of tweets. Guthman said the technology could gauge sentiment by attaching “positive and negative points” to certain phrases, while measuring “proximity of words to certain words.”
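The point-based approach Guthman describes can be sketched as a simple lexicon scorer. The word lists and function below are hypothetical stand-ins; a production system would use far larger, weighted lexicons and account for word proximity:

```python
# Hypothetical word lists; a real system would use a much larger,
# weighted lexicon.
POSITIVE = {"peaceful", "calm", "support", "love"}
NEGATIVE = {"angry", "violence", "riot", "clash"}

def sentiment_score(message):
    """Count positive words minus negative words, the simplest form
    of the point-based scoring described above."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

sentiment_score("peaceful march with community support")  # 2
```

Even this toy version makes the privacy critique concrete: the score turns entirely on which words a private company chose to put in each list.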

Privacy advocates, however, have expressed concern about these sorts of automated judgments.

“When you have private companies deciding which algorithms get you a so-called threat score, or make you a person of interest, there’s obviously room for targeting people based on viewpoints or even unlawfully targeting people based on race or religion,” said Lee Rowland, a senior staff attorney with the American Civil Liberties Union.

She added that there is a dangerous trend toward government relying on tech companies to “build massive dossiers on people” using “nothing but constitutionally protected speech.”

Author : Lee Fang

Source : https://theintercept.com/2016/04/14/in-undisclosed-cia-investments-social-media-mining-looms-large/

Yesterday, Google released a new quality raters guidelines PDF document that was specifically updated to tell the quality raters how to spot and flag offensive, upsetting, inaccurate and hateful web pages in the search results.

Paul Haahr, a lead search engineer at Google who celebrated his 15th year at the company, told us that Google has been working on algorithms to combat web pages that are offensive, upsetting, inaccurate and hateful in their search results. He said it only impacts about 0.1% of the queries but it is an important problem.

With that, they want to make sure their algorithms are doing a good job. So they have updated the quality raters guidelines so that raters can test whether the search results reflect the algorithms' intent. If they don't, that data goes back to the engineers, who can tweak things or build new algorithms or machine learning techniques to weed out even more of the content Google doesn't want in its search results.

Paul Haahr explained that there are times when people specifically want to find hateful or inaccurate information. Maybe on the inaccurate side they like satire sites, or maybe on the hate side they hate people. Google should not prevent people from finding content that they want, Paul said. And the quality raters guidelines explain, with key examples, how raters should rate such pages.

But overall, ever since the elections, Google, Facebook and others have been under fire to do something about facts and hate and more. They released fact checking schema for news stories. They supposedly banned AdSense publishers. They removed certain classes of hate and inaccurate results from the search results. And they tweaked the top stories algorithm to show more accurate and authoritative results.

Google has been working on this and wants to continue working on it. The quality raters will help make sure that what the engineers are doing does translate into proper search results. At the same time, as most of you know, quality raters have no power to remove search results or adjust rankings; they just rate the search results, and that data goes back to the Google engineers to use.

Danny Sullivan and Jennifer Slegg have both dug into the quality raters guidelines changes, so go to those two sites to read their summaries of how Google defines these categories. Overall it is pretty fascinating, because these are not easy judgement calls, so Google has to define them pretty precisely.

It is an important problem, but with only 0.1% of queries impacted, it seems like a lot of effort is being put into it.

Download the updated raters guidelines over here.

Forum discussion at WebmasterWorld.

Author : Barry Schwartz

Source : https://www.seroundtable.com/google-algorithms-targets-offensive-inaccurate-hate-23558.html

When Larry Page and Sergey Brin invented PageRank back in 1996, they had one simple idea in mind: Organize the web based on “link popularity.”

In short, in the universe of pages existing in a (at the time almost) shapeless web, Page and Brin wanted to organize that information and turn it into knowledge. The logic was pretty simple, yet extremely powerful. First, if many pages linked to a page, that page gained relevance. Second, even if a page had fewer links from other pages, when those linking pages were themselves more important, the linked page's ranking also improved.

In other words, how much a page linked to others and how much other important pages linked back to it determined a score from 0 to 10. A higher score meant more relevance, and thus more chances of being shown by what would eventually become the greatest search engine of all time: Google.
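The link-voting idea described above can be sketched in a few lines of code. This is a minimal illustration, not Google's implementation: the four-page web is invented, and the 0.85 damping factor comes from the original Page and Brin paper rather than from this article.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively score pages by the weight of the pages linking to them."""
    pages = list(links)
    n = len(pages)
    # Every page starts with an equal share of the total score.
    ranks = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_ranks = {}
        for page in pages:
            # A page's score is the damped sum of the shares passed on
            # by every page that links to it; a link from an important
            # page is worth more than a link from an obscure one.
            incoming = sum(
                ranks[src] / len(outs)
                for src, outs in links.items()
                if page in outs
            )
            new_ranks[page] = (1 - damping) / n + damping * incoming
        ranks = new_ranks
    return ranks

# "A" is linked to by every other page, so it ends up with the top score.
links = {
    "A": ["B"],
    "B": ["A"],
    "C": ["A"],
    "D": ["A", "B"],
}
scores = pagerank(links)
print(max(scores, key=scores.get))  # "A" ranks highest
```

The real algorithm works at web scale and handles pages with no outgoing links, but the core intuition is exactly this loop: importance flows along links.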

Nowadays, when you open a web browser, you are not looking at the web itself, but rather at the way Google indexes it. Presently, Google is the most visited website, and chances are this scenario will remain unchallenged at least in the near future.

What does that imply? Simply that if Google doesn’t know you exist, de facto you don’t. Thus, how can you make Google know you exist?

Web writing at the time of PageRank

Before 2013, machines and humans used two completely different languages. Much as a bird of paradise's song means nothing to an eagle, search engines could not understand human language unless humans changed their writing process.

It was the birth of the web writing industry. This industry was based on one premise: follow what Google says is relevant. That premise generated a cascade of consequences that still affects the web today.

In fact, up to 2013, Google's algorithm took into account over 200 factors to determine the relevance of a piece of content. Yet those factors weren't necessarily in line with what human readers wanted to see. Thus, for the first time in human history, people started to write for machines' sake. That changed when, in 2013, Google launched RankBrain.

How RankBrain and Artificial Intelligence changed web writing

Out of the more than 200 factors that Google accounts for when deciding whether content on the web is relevant, RankBrain became the third most important.

Yet what is revolutionary about RankBrain is the fact that it uses Natural Language Processing (NLP) to translate human language into machine language, leaving the writing process unaffected. Thus, rather than worrying about search engine optimization, writers can finally go back to doing what they have done best for the last five thousand years: writing compelling stories.

Although it may sound trivial for a traditional writer, that was a revolution for web writers.

There is one caveat, though. Instead of thinking in terms of the single article, writers should start thinking in terms of entities. What is an entity, then?

The birth of the Semantic Web

As we saw, before 2013 Google incentivized writing standards that were tailored for machines rather than humans. This scenario changed when RankBrain was launched.

The new algorithm enabled the emergence of a new way of thinking about the web: a semantic web. Its father was Tim Berners-Lee, who in 2006 called for a transition from the web to the semantic web.

Why is that relevant and what does Semantic Web stand for?

First, the semantic web is a set of rules and standards that make human language readable to machines. Second, there was a transition from the single word to the general context - or, put in technical jargon, from keyword to entity.

In short, making a piece of content relevant to search engines used to mean placing a set of keywords within an article. Yet that strategy is not enough anymore. Indeed, what makes a piece of content relevant nowadays is the context in which it stands.

In semantic web jargon, an entity is a subject with an unambiguous meaning, because it has a strong contextual foundation. Although strong and solid, that foundation is in constant flux. That makes information structured as an entity far more reliable than any set of keywords. At the same time, an entity is also more powerful, as it adapts to the context in which it stands.

What does that imply? A single entity can replace a whole set of keywords, making writing more human.
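To make the idea of an entity concrete, here is a small sketch of how one is typically expressed in practice: as schema.org JSON-LD markup embedded in a page. The property names follow the public schema.org vocabulary; the example values are just an illustration, not taken from this article.

```python
import json

# A machine-readable "entity": an unambiguous subject with typed
# properties and a link (sameAs) that anchors it to a known identity,
# rather than a loose bag of keywords.
entity = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Tim Berners-Lee",
    "jobTitle": "Inventor of the World Wide Web",
    "sameAs": "https://en.wikipedia.org/wiki/Tim_Berners-Lee",
}

# Serialized and placed inside a <script type="application/ld+json">
# tag, this tells a search engine exactly who the page is about.
markup = json.dumps(entity, indent=2)
print(markup)
```

The writer's prose stays human; the entity markup rides alongside it and carries the context a machine needs.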

The future of web writing

Even though no one really knows how the future will unfold, the hope is that, thanks to Artificial Intelligence, writers will finally be empowered, free to write amazing stories that enrich the human collective intelligence. In other words, instead of drifting from writing to web writing as unconsciously as the human race transitioned from hunter-gathering to farming, it is time to take this step deliberately and intentionally. That means giving the web writing stage back to whom it belongs: human beings!

Gennaro Cuofano is a Growth Hacker at WordLift, a software company that helps web writers organize their content and reach more readers while remaining focused on what they do best, writing.

Author : Guest contributor

Source : https://bdtechtalks.com/2017/03/15/how-artificial-intelligence-is-changing-web-writing/

NETRALNEWS.COM - The number of internet users worldwide has roughly doubled in the past eight years to around 3.5 billion. The people who have come aboard in the past few years are spending their time on something that was overshadowed long ago in developed countries by apps: the mobile web browser.

Single-purpose apps like Facebook and Snapchat are the product of markets where monthly data plans and home Wi-Fi are abundant. App stores require email addresses and credit cards, two things many new phone owners just don’t have.

In places like India, Indonesia and Brazil, it’s easy to buy an Android phone for as little as $25—even less for older second-hand (or third-hand) refurbished phones. But there’s likely to be little onboard storage, and the pay-as-you-go data plan is too precious to waste on apps, especially those that send and receive data even when you aren’t using them.

Browsers are popular again, not just because typing a URL has become simpler, but also because they work harder to compensate for the nature of wireless access in emerging markets.

Southeast Asia, South Asia, South America, Mexico and Africa are all areas where the dominant browsers—Alibaba’s UC Browser, Opera Mini by Opera Software and Google Chrome from Alphabet Inc.—have the ability to compress browsing data, by up to 90% in some cases, so people burn up as little as possible. UC Browser and Opera Mini also have robust built-in ad blocking, further cutting down on data costs.
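As a toy illustration of why compression saves so much on text-heavy pages, consider how well repetitive HTML shrinks under a general-purpose compressor. This is only a sketch: products like Opera Mini and UC Browser go much further, re-rendering pages on proxy servers, and the sample page below is invented.

```python
import zlib

# Text-heavy HTML is highly repetitive, so it compresses extremely well.
html = b"<p>Hello world</p>" * 500
compressed = zlib.compress(html, level=9)

# Fraction of bytes the user never has to pay for over the air.
ratio = 1 - len(compressed) / len(html)
print(f"saved {ratio:.0%} of the bytes")
```

On real, mixed-content pages the savings are smaller than on this contrived sample, which is why the figure quoted above is "up to" 90%.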

On Friday, Jana, a mobile-ad company, entered this browser market with another incentive: free daily data. By delivering 10 megabytes (or about 20 minutes) of free data a day through its mCent Browser, Jana hopes to build a following and pay for it by charging for conventional ads and sponsorship of the browser. It also intends to charge partners to be their browsers’ default search engine.

In terms of the scale of the users they have accumulated—UC Browser had more than 400 million users as of last April—these browser businesses are making a virtue out of the constraints of mobile-telecom systems in rural areas and emerging markets, where infrastructure is generations behind what it might be in richer countries.

As the global middle class continues to rise in emerging markets, browser makers are racking up users nearly as fast as Facebook did in its highest-growth period. And they are figuring out how to keep their users occupied while monetizing them through mobile advertising.

Google, Facebook and other internet giants are well aware of these trends. Two-thirds of Facebook’s users are in emerging markets, and while the company’s Free Basics program—part of Internet.org —was banned in India for favoring some websites over others, it is available in many countries in Africa and South America. And Facebook says it has upward of 200 million users on Facebook Lite, an app for low-bandwidth users.

As for Google, it benefits inherently from rapid global internet adoption, which would be impossible without Android. And while Google’s mobile Chrome browser remains dominant in many emerging markets, it also pays Opera, among others, to direct search traffic to ad-supported Google services.

It’s logical that as people in emerging markets become wealthier and their mobile infrastructure becomes better, they’ll follow the same trends as their richer peers, and their internet consumption will shift to apps. India, with its 1.3 billion people, is projected to increase its per-capita income by 125% by 2025, according to Morgan Stanley.

But for the foreseeable future, Opera, UC Browser and Jana are all betting that the ranks of these “next billion” people coming onto the internet will continue to refresh themselves—and experience constraints that mobile browsers are uniquely capable of alleviating.

“In India, the raw growth numbers are just huge—it’s both a lot more people coming online but also usage, because data is getting cheaper,” says Nuno Sitima, an executive vice president and head of mobile business at Opera Software, founded in 1995 and based in Oslo, Norway; it was sold last year to a consortium of Chinese investors for $575 million.

In terms of new downloads, Africa is growing fastest, Mr. Sitima says, while Southeast Asia, with more than 600 million people, is another huge market for these browsers. For Alibaba, which acquired UCWeb in 2014 for north of $1.9 billion, UC Browser isn’t just a browser, but a beachhead.

The company is rolling out ways to make its browser sticky, like a sprawling, aggregation-fueled news site in India, where it is the No. 1 browser. While mCent Browser is just launching in beta, Nathan Eagle, Jana’s chief executive, says the prospect of free internet is extremely appealing to users in the developing world.

To date, Jana’s core product has been an ad-powered payment system, also called mCent, on which its new browser depends. Basically, mCent pays for the airtime of users who watch ads or redeem promotions. Through relationships with 311 mobile operators in 90 countries, Jana is connected to the billing back-end of more than 4 billion mobile accounts and has leveraged that access for 30 million mCent payment users so far.

Mr. Eagle says he wants to bring a billion more people online. Google and Facebook have been working on the same problem, in part by launching balloons and drones to create airborne communication networks. “The way we’re trying to go about solving the free internet problem is a lot less sexy,” says Mr. Eagle. But by leveraging existing mobile infrastructure, along with the desire of brands like Unilever, a client of Jana’s, to reach customers in emerging markets, he argues his solution is more viable.

After all, which is more likely—getting another billion people online by flying cellular radios over their heads, or by making it more affordable to connect to cell towers that are already in range but whose cost is out of reach?

Source: http://www.en.netralnews.com/news/business/read/2500/web.browsers..not.apps..are.internet.gatekeepers.for.the....next.billion

Stephen Hawking has warned that technology needs to be controlled in order to prevent it from destroying the human race.

The world-renowned physicist, who has spoken out about the dangers of artificial intelligence in the past, believes we need to establish a way of identifying threats quickly, before they have a chance to escalate.

“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” he told The Times.

“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”

He suggests that “some form of world government” could be ideal for the job, but would itself create more problems.

“But that might become a tyranny,” he added. “All this may sound a bit doom-laden but I am an optimist. I think the human race will rise to meet these challenges.”

In a Reddit AMA back in 2015, Mr Hawking said that AI would grow so powerful it would be capable of killing us entirely unintentionally.

“The real risk with AI isn't malice but competence,” Professor Hawking said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.

“You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants.”

Tesla CEO Elon Musk shares a similar viewpoint, having recently warned that humans are in danger of becoming irrelevant.

“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” he said, suggesting that people could merge with machines in the future, in order to keep up.

Author : Aatif Sulleyman

Source : http://finance.yahoo.com/news/professor-stephen-hawking-says-world-121118256.html



World's leading professional association of Internet Research Specialists - We deliver Knowledge, Education, Training, and Certification in the field of Professional Online Research. The AOFIRS is considered a major contributor in improving Web Search Skills and recognizes Online Research work as a full-time occupation for those that use the Internet as their primary source of information.
