
Web Directories

Jay Harris


Finally ready to get off the grid? It's not quite as simple as it should be, but here are a few easy-to-follow steps that will at the very least point you in the right direction.

If you're reading this, it's highly likely that your personal information is available to the public. And while you can never remove yourself completely from the internet, there are ways to minimize your online footprint. Here are five ways to do so.

Be warned, however: removing your information from the internet as I've laid it out below may adversely affect your ability to communicate with potential employers.

1. Delete or deactivate your shopping, social network, and Web service accounts

Think about which social networks you have profiles on. Aside from the big ones, such as Facebook, Twitter, LinkedIn and Instagram, do you still have public accounts on sites like Tumblr, Google+ or even MySpace? Which shopping sites have you registered on? Common ones might include information stored on Amazon, Gap.com, Macys.com and others.

To get rid of these accounts, go to your account settings and just look for an option to either deactivate, remove or close your account. Depending on the account, you may find it under Security or Privacy, or something similar.

If you're having trouble with a particular account, try searching online for "How to delete," followed by the name of the account you wish to delete. You should be able to find some instruction on how to delete that particular account.

If for some reason you can't delete an account, change the info in the account to something other than your actual info. Something fake or completely random.


 

Using a service like DeleteMe can make removing yourself from the internet less of a headache.

2. Remove yourself from data collection sites

There are companies out there that collect your information. They're called data brokers, and they have names like Spokeo, Crunchbase, PeopleFinder and plenty of others. They collect data from everything you do online and then sell that data to interested parties, mostly in order to advertise to you more specifically and sell you more stuff.

Now you could search for yourself on these sites and then deal with each site individually to get your name removed. Problem is, the procedure for opting out of each site is different and sometimes involves sending faxes and filling out actual physical paperwork. Physical. Paperwork. What year is this, again?

An easier way to do it is to use a service like DeleteMe at Abine.com. For about $130 for a one-year membership, the service will jump through all those monotonous hoops for you. It'll even check back every few months to make sure your name hasn't been re-added to these sites.

3. Remove your info directly from websites

First, check with your phone company or cell provider to make sure you aren't listed online and have them remove your name if you are.

If you want to remove an old forum post or an old embarrassing blog you wrote back in the day, you'll have to contact the webmaster of those sites individually. You can either look at the About us or Contacts section of the site to find the right person to contact or go to www.whois.com and search for the domain name you wish to contact. There you should find information on who exactly to contact.

Unfortunately, private website operators are under no obligation to remove your posts. So, when contacting these sites be polite and clearly state why you want the post removed. Hopefully they'll actually follow through and remove them.

If they don't, tip number four is a less effective, but still viable, option.
4. Delete search engine results that return information about you

Search engine results include sites like Bing, Yahoo and Google. In fact, Google has a URL removal tool that can help you delete specific URLs.

Google's URL removal tool is handy for erasing evidence of past mistakes from the internet.

For example, if someone has posted sensitive information such as a Social Security number or a bank account number and the webmaster of the site where it was posted won't remove it, you can at least contact the search engine companies to have it removed from search results, making it harder to find.

5. And finally, the last step you'll want to take is to remove your email accounts

Depending on the type of email account you have, the number of steps this will take will vary.
You'll have to sign into your account and then find the option to delete or close the account. Some accounts will stay open for a certain amount of time, so if you want to reactivate them you can.

An email address is necessary to complete the previous steps, so make sure this one is your last.

One last thing...

Remember to be patient when going through this process. Don't expect it to be completed in one day. And you may also have to accept that there are some things you won't be able to permanently delete from the internet.

Source: http://www.cnet.com/how-to/remove-delete-yourself-from-the-internet/

Editors' note: This article was originally published in December 2014. It has been updated with only a few minor tweaks.

Tuesday, 05 July 2016 03:21

Yes, the internet is like a utility

Imagine you are launching a startup and you require speedy internet access for you and your customers. Imagine you are one of the customers.

Now imagine speed that is not quite up to snuff compared to the Amazons and Netflixes of the world — your would-be competitors. You’d quickly go under. And those consumers? Color them frustrated because they’ve been denied choice.

Preventing that is the promise of a recent 2-1 ruling by a federal appeals court in favor of net neutrality, the concept that broadband service companies shouldn’t be able to create slow lanes and fast lanes based on ability to pay. Tech giants such as Amazon and Netflix have supported net neutrality.

Of course, that’s not how those representing broadband providers characterize the ruling by the U.S. Court of Appeals for the District of Columbia. They say the ruling for net neutrality will stymie innovation because it won’t encourage improved connections.

No; just as likely, more competitors for internet services will enter the field and they will provide innovation — and a level playing field in which the consumer benefits because of more choices. If this ruling stands, broadband companies won’t be able to divide those dependent on the internet into haves and have-nots.

This case pitted the Federal Communications Commission against those representing the broadband companies, which were clearly hungry for the ability to be high-cost gatekeepers.

The latest ruling is premised on the notion that the internet is more public utility than a mere conveyance for cat and puppy videos. It is more than an information provider, which is what the broadband companies argued in successfully challenging net neutrality earlier. If the ruling stands, the federal government can regulate the pipeline to encourage equal access for everyone.

It’s hard to argue with the concept. Whether for work or play, imagine your broadband service even clunkier — as in slower — than it is today.

Those opposing net neutrality have pledged to take this all the way to the U.S. Supreme Court. And there is no reason not to believe them. This is why this ruling still offers but a promise of net neutrality.

But it is a promise that portends a level playing field for businesses and consumers. We hope the Supreme Court sees this as clearly.

Source:  http://www.mysanantonio.com/opinion/editorials/article/Yes-the-internet-is-like-a-utility-8337056.php

Most search marketers understand that it's important to understand your demographics before you can be successful in an SEO campaign. You need to understand who your customers are, why they might be searching for your business, and what kinds of things they want to see when they get to your site.

Accordingly, market research is one of the first steps you'll need to take when planning an SEO campaign. With it, you'll be able to target the right keywords, craft the right content, and eventually get that target market to convert more often.

Unfortunately, there are a number of misconceptions and flawed approaches that prevent search marketers from researching their prospective audiences effectively.

Budgeting

The first problem comes in budgeting, both in terms of time and money. As you might imagine, the more time and money you invest in market research, the more raw information you're going to get. If you don't invest enough time, for example, you may not gather enough information to form a suitable conclusion. If you don't invest enough money, you might not get reliable information. But the problem also extends to the other end of the spectrum; if you invest too much at the outset, you may end up with redundant information or waste too much time and money for your information to be worth it.

The Right Questions to Ask

You also need to know what kinds of questions to ask. Simply learning "more" about your users isn't going to help you directly when it comes to planning your target keywords, creating an overall content strategy, or sketching a plan for your link building campaign. Keep your focus not on independent identifiers (such as education level or geographic location), but instead on how those identifiers relate to your campaign (such as how familiar they are with your industry, or how they're likely to search). Most marketers get caught up in seeking information without a tie back to a practical takeaway.

Sources

Most marketers end up relying only on one or two sources of information; this is inherently flawed. Different data sets are going to offer you slightly different insights, based on their selection samples and their approaches. It's far better to collect information from multiple sources to ensure you have the broadest perspective possible on your target market.

You'll also want to make sure you consult both primary and secondary sources. Secondary sources are sources that have already conducted research and have formed conclusions; for example, the US Census Bureau offers tons of demographic information you can access for free. Primary sources rely on your own research, and often take the form of surveys, interviews, or other firsthand methods of gathering information. These have complementary advantages and disadvantages, so be sure to take advantage of both.

Buying Cycle Considerations

Many search marketers also neglect an important aspect of demographics: the buying cycle. You might know your average customer's interest level, demographic makeup, and maybe even a bit about their search behavior, but at what point in the buying cycle are you targeting them? Are you looking for customers early in the research phase, or customers ready to buy immediately? There's a broad spectrum here (for most businesses), and you can get very different answers from the same target market based on where you set your goals.

The Right Demographics

When it comes to market research, most search marketers start with a demographic in mind. They then work to find more information about this demographic, using the methods and considerations I've mentioned above. This is useful, but it depends on one crucial assumption: that you've chosen the right demographic in the first place. Part of this question ties back to a broader question of your business, but don't underestimate it, and don't leave your assumptions unchallenged. Another demographic may exist in greater numbers, with a greater interest in your business--so don't leave any rocks unturned here.

If you can proactively identify and correct these misconceptions and flawed approaches before they interfere with your market research, you'll establish a better course for your organization's SEO campaign. This isn't a guarantee that all your information will be accurate, or that all your other market research techniques are correct, but it will help you avoid some of the most common pitfalls that prevent this work from being effective. From here, you can combine your market research with your competitive research, and start collecting the best target keywords for your campaign.

Source:  http://www.inc.com/samuel-edwards/what-search-marketers-get-wrong-about-demographic-research.html

Earlier this week, Samsung rolled out support for ad blocking in the new version of its web browser for mobile devices, the Samsung Internet Browser. Third-party developers quickly responded by launching ad-blocking mobile apps that work with the browser. Now those developers are finding their apps are being pulled from Google Play, and their updates are being declined. The reason? It seems Google doesn’t want ad blockers to be distributed as standalone applications on its Google Play store.

In case you missed it: a few days ago, Samsung introduced ad blocking within its mobile web browser. The feature works a lot like Apple’s support for ad blocking in Safari, which arrived with the release of iOS 9. Specifically, Samsung launched a new Content Blocker extension API which allows third-party developers to build mobile apps that, once installed, will allow those surfing the mobile web via Samsung’s browser to block ads and other content that can slow down web pages, like trackers.
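Content-blocker extensions of this kind are essentially declarative filter rules rather than running code. As a rough illustration, here is what a rule list looks like in Apple's Safari content-blocker JSON format, which the article compares Samsung's feature to; whether Samsung's API uses these exact field names is an assumption, and the domains are invented:

```json
[
  {
    "trigger": { "url-filter": "ads\\.example\\.com" },
    "action": { "type": "block" }
  },
  {
    "trigger": {
      "url-filter": ".*",
      "resource-type": ["script"],
      "if-domain": ["tracker.example.net"]
    },
    "action": { "type": "block" }
  }
]
```

Each rule pairs a trigger (which requests to match) with an action. Because the browser evaluates the list before requests are made, pages skip fetching blocked ads and trackers entirely, which is where the speed gains come from.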

Apparently, Google – which just so happens to be in the ad business itself – is not a fan of this new functionality.

One of the first third-party ad blockers to launch following Samsung’s announcement was Adblock Fast. The app quickly became the top free app on Google Play in the “Productivity” category, but has since been banned from Google Play.

According to Rockship Apps founder and CEO Brian Kennish, maker of Adblock Fast, Google’s app reviews team informed him the app was being removed for violating “Section 4.4” of the Android Developer Distribution Agreement.

This is the section that informs developers they can’t release apps that interfere with “the devices, servers, networks, or other properties or services of any third-party including, but not limited to, Android users, Google or any mobile network operator.”

If that text sounds a little broad-reaching and vague, that’s because it is. It’s also what allows Google to react to changes in the industry, like this one, on the fly.

adblock-samsung

Kennish says that Google’s app reviews team informed him that he could resubmit after modifying his app so it didn’t “interfere with another app, service or product in an unauthorized manner.”

“We’ve been trying to contact Google through their public channels since Monday, and I tried through private ones all day yesterday…but we haven’t gotten any official response from a human – just autoresponders,” notes Kennish.

He suspects that Adblock Fast was the first to be pulled from Google’s app store because it had climbed the charts so quickly and had achieved a 4.25 rating. Kennish says that the app had around 50,000 installs at the time of its removal.

In addition, the company could have gotten on Google’s radar by pushing out an update that offered a better user experience. (Some people didn’t realize it only worked on Samsung’s 4.0 browser and left 1-star reviews. The update was meant to better highlight the app’s requirements.)

crystal-android

Meanwhile, as of the time of writing, other ad blockers are still live, including Crystal and Adblock Plus (Samsung Browser). However, that may not be the case for long.

Crystal’s developer Dean Murphy also just submitted an update that’s just been declined by Google’s app review team for the same reason cited above. Again, Google references section 4.4 of the Developer Agreement as the reason for stopping the update from going live.

“I have appealed the update rejection, as I assume that I am rejected for ‘interfering’ with Samsung Internet Browser, citing the developer documentation that Samsung have for the content blocking feature,” explains Murphy. “I’m still awaiting their reply.”

Adblock Plus tells us that its new app, an extension for Samsung’s browser, is still live, and they have not yet heard from Google about its removal. However, they have also not tried to update the app yet, according to co-founder and CEO Till Faida.

From our understanding of the situation, Google will continue to support mobile browsers that can block ads within themselves, either via built-in functionality (as with the Adblock Plus browser) or via extensions (as with the Firefox, Javelin and Dolphin browsers), but only when those extensions are not distributed via APKs (downloadable apps) on Google Play.

Or to put it more simply: browser apps that block ads are okay; ad blocking apps are not.

It’s not clear at this time why Crystal and Adblock Plus (Samsung Browser) have not also been pulled from Google Play. But killing a developer’s ability to update their app has a similar effect as a full removal, in terms of both sending a message to the individual app developer, as well as the wider developer community.

Reached for comment, a spokesperson for Google only offered the following statement:

“While we don’t comment on specific apps, we can confirm that our policies are designed to provide a great experience for users and developers.”

Given the situation at hand, it seems that Samsung will need to re-evaluate how its ad-blocking feature is being implemented. Either it will need to build in support for non-APK extensions, or it will need to figure out another way for developers to distribute their APK files outside of Google Play, such as in a self-hosted app store.

Source:  http://techcrunch.com/2016/02/03/google-boots-ad-blockers-from-google-play/

Google is marking Safer Internet Day, which falls today, by introducing new authentication features to Gmail to help better identify emails that could prove to be harmful or are not fully secure.

The company said last year that it would beef up security measures and identify emails that arrive over an unencrypted connection, and now it has implemented that plan for Gmail, which Google just announced has passed one billion active users. Beyond just flagging emails sent over unsecured connections, Google also warns users who are sending them.

Gmail on the web will alert users when they are sending email to a recipient whose account is not encrypted with a little open lock in the top-right corner. That same lock will appear if you receive an email from an account that is not encrypted.

Encryption is important for email because it lowers the possibility that a message might be hijacked by a third-party. Google switched to HTTPS some while ago to ensure that all Gmail-to-Gmail emails are encrypted, but not all other providers have made the move. Last year, Google said that 57 percent of messages that users on other email providers send to Gmail are encrypted, while 81 percent of outgoing messages from Gmail are, too.

Another measure implemented today shows users when they receive a message from an email account that can’t be authenticated. If a sender’s profile picture is a question mark, that means Gmail was not able to authenticate them.

Authentication is one method for assessing whether an email is a phishing attempt or another kind of malicious attack designed to snare a user’s data or information.

“If you receive a message from a big sender (like a financial institution, or a major email provider, like Google, Yahoo or Hotmail) that isn’t authenticated, this message is most likely forged and you should be careful about replying to it or opening any attachments,” Google explained in its Gmail help section.

Unauthenticated emails aren’t necessarily dangerous, but, with this new indicator, Google is giving users more visibility on potential threats to help them make better decisions related to their online security.

Finally, because good news is supposed to come in threes, Google said today that it is gifting users 2GB of additional storage for Google Drive at no cost. To grab the freebie, simply complete the new security checkup for your Google account.

The process, which Google claimed takes just two minutes, will see you check your recovery information, which devices are connected to your account and what permissions that you’ve enabled. Google offered the same deal last year for Safer Internet Day, and the company said the 2GB expansion is open to all users — including those who snagged 2GB last year. (Small caveat: the offer isn’t open to Google Apps for Work or Google Apps for Education accounts.)

Simply head to your Google account to get started.

Source:  http://techcrunch.com/2016/02/09/gmail-now-warns-users-when-they-send-and-receive-email-over-unsecured-connections/

As Google increasingly incorporates direct answers and other types of featured snippets into search results pages, columnist Andrew Shotland points out that businesses may want to get smarter about marking up their pages.

I have been noticing a lot of Google Answer Boxes showing up for queries with local intent these days. My recent post, Are You Doing Local Answers SEO? pointed out this fantastic result HomeAdvisor is getting for “replace furnace” queries:

Replace Your Damn Furnace Already

When clients get these local answer boxes, they often perform significantly better than regular #1 organic listings. In our opinion, these seem to be driven primarily by the following factors:

Domain/page authority

Text that appears to answer the query

Easy-to-understand page structures (broken up into sections that target specific queries, tables, prices and so on). Schema is not necessary here, but it helps.
For more of a deep dive on how these work, see Mark Traphagen’s excellent summary of last year’s SMX West panel on The Growth of Answers SEO.

But I am not here to talk about how great answer boxes are. I am here to talk about this result that recently popped up for “university of illinois apartments”:

Google Answer Boxes Gone Wild

At first glance, you might think this was a basic list of apartments for rent near the university. But if you look closer at the grid of data, you will see that it looks more like part of a calendar, which is pretty useless.

Many searchers may look past this and just click on the link, but this got me thinking that I really don’t want Google controlling what parts of my site get shown in the SERPs, particularly when it looks more like a Lack of Knowledge Box.

Think about if you had some unsavory user comments on the page that appeared in the answer box. Not only would this be a useless result, but it also might be damaging to your brand. The apartments result might make some searchers think ApartmentFinder is a bad site. So what went wrong here?

If you examine the ApartmentFinder URL in the answer box, you’ll notice that it doesn’t display any calendar in the UI. But if you search the code for “calendar,” you’ll see:

Calendar Code

This shows that there is some kind of calendaring app in a contact form.

As you can see from the next screen shot, the first Contact button that appears on the page is fairly close to the h1 that reads, “81 apartments for rent near the University of Illinois”:

Calendar Contact

And if you click on the Contact button, you get a pop-up form with a calendar:

Calendar Pop Up

It seems that Google is:

assuming the query deserves a more complex list of results than the standard SERP;
looking for the data closest to the strongest instance of the query text on the page (the h1); and
taking the first thing that looks like a table of data and putting it on the SERP. (I am sure it’s more complicated than that, but not too sure.)

So what can you do to avoid this?

1. Mark up your data with schema.org markup. This should give you the best chance of avoiding Google getting your info wrong. (On that note, the Schema.org site itself is kind of a drag to use. Try Google’s own site on Structured Data. It has all of the schema stuff you’ll need, plus some stuff that isn’t on Schema.org.)

2. Make sure the content you want to appear in answer boxes is closest to the on-page text that has the strongest match for the query — often the h1, but this could be a subheading, as well. If possible, make multiple subheadings that target different queries (e.g., “cheap apartments for rent,” “pet friendly apartments,” and so on) that might be the best results. For more on why this might be important, check out Dave Davies’ great take on the recent presentation from SMX West on how Google works by Google’s Paul Haahr. And while you’re at it, Rae Hoffman’s take on it is pretty great, too.

3. Put your content in a simple table on the page, or at least make it easy for Google to build its own. The fact that ApartmentFinder doesn’t mark any of its listings on that page with what type of listing it is makes it hard for Google to show a table of, say, one-bedroom apartments for rent at specific prices. Just adding “1BR” in text on each one-bedroom result may be enough to fix the problem.
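To make the markup recommendation concrete, here is what schema.org structured data could look like for a single listing, expressed as JSON-LD. The @type and property names come from schema.org; the listing details themselves are invented for illustration:

```html
<!-- Hypothetical example: one apartment listing marked up with
     schema.org types so Google doesn't have to guess at page structure. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Apartment",
  "name": "1BR apartment for rent near the University of Illinois",
  "numberOfRooms": 1,
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Champaign",
    "addressRegion": "IL"
  }
}
</script>
```

Explicit typing like this gives Google clean structured data to build its answer box from, rather than grabbing whatever table-shaped markup happens to sit near the h1, such as a contact form's calendar widget.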

Figuring out how to impact the answer box displays is akin to what we all went through trying to figure out how to influence what shows up for titles, descriptions and rich snippets. It can take a bit of trial and error, but when it works, it can be the answer to your SEO prayers.

Source:  http://searchengineland.com/dont-trust-google-structure-local-data-246585

Wednesday, 18 May 2016 04:09

What Am I Searching (And Do I Care?)

Indexed discovery services are a lot like Google: you can search simply and get results quickly, but the sources you are searching tend to be mysterious. Do you know if you are searching specialized sources or generic sources? Authoritative or with an agenda? When a researcher pushes the search button, they get whatever results are deemed relevant from whatever sources are included, and they can’t limit their search to only the sources that matter to them.

 

For some researchers, knowing the sources behind the search really makes no difference to them at all. To these researchers, often undergraduates, it’s the results that count. Most results nowadays do show the source, publication or journal that result is from. This makes it somewhat easier to eyeball a page of results, disregard those from irrelevant sources, or select results as appropriate if they are from an authoritative source. But that research methodology seems inefficient to say the least.

 

Serious researchers, on the other hand, want to know what they are searching. If they know that their information will most likely be in three or four specific resources out of the 20 or 30 their organization subscribes to, then why should they wade through a massive results list or spend one iota of extra time filtering out the extra sources to view their nuggets of information? (Quick answer, they shouldn’t.)

 

To begin to combat this resource-transparency problem, libraries are creating separate web pages of source lists and descriptions for serious researchers. Who is the provider? What is the resource, and what information exactly does it provide? These pages also list the categories of sources searched, such as ebooks, articles and multimedia collections. While these lists are certainly helpful to document, is it fair to ask researchers to consult a separate web page to understand what digital content is included in their search, particularly when they are urgently trying to find something from a particular source of information? Or should we ask researchers to disregard what sources they are searching and just pay attention to the results? Neither seems appropriate in this day and age.

Most of us know the benefits of a single search across all resources. One search improves the efficiency of searching disparate sources, makes comparing and contrasting results faster, and provides an opportunity to save, export or email selected results. Explorit Everywhere! goes one step further by giving researchers transparency into the sources, so they can search even faster. One of our customers mentioned that they moved from a well-known discovery service because they were frustrated with all of the news results that were returned. It didn't help that their researchers couldn't select specific sources to search, particularly when their searches always seemed to bring back less-than-relevant results.

Explorit Everywhere! helps to narrow a search up front with not only the standard Advanced Search fields but also a list of sources to pick and choose from. A researcher looking to search four specific sources doesn't want to run a search against 25. They can narrow the playing field and home in on the needle in the haystack faster. And from the results page, they can limit to each individual source to view only those results, in the order that the source ranked them. A serious researcher's dream? That's what we've heard.

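In code terms, the source-level narrowing described above boils down to restricting a federated result set to a user-chosen subset of sources. Below is a minimal sketch of that idea; the source names, record shape, and function are invented for illustration and are not Explorit Everywhere!'s actual API:

```python
# Sketch of source-level filtering in a federated search.
# All source names and records here are hypothetical.

def filter_by_sources(results, selected_sources):
    """Keep only results from the user-selected sources,
    preserving each source's own ranking order."""
    return [r for r in results if r["source"] in selected_sources]

results = [
    {"title": "Gene therapy advances", "source": "PubMed",   "rank": 1},
    {"title": "Markets rally",         "source": "NewsWire", "rank": 1},
    {"title": "CRISPR screening",      "source": "PubMed",   "rank": 2},
]

# Keeps only the two PubMed records, in the order PubMed ranked them.
print(filter_by_sources(results, {"PubMed"}))
```

The same predicate can run up front (search only the chosen sources) or on the results page (limit an already-retrieved list to one source at a time).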
Not all researchers care about drilling down into individual sources like this. But in Explorit Everywhere! the option is there to search the broad or the narrow path. We even filter out the rocks.

Source: http://www.deepwebtech.com/2016/03/what-am-i-searching-and-do-i-care/

Google's Self-Driving Car Project and Fiat Chrysler last week announced that they would integrate autonomous vehicle technology into 2017 Chrysler Pacifica Hybrid minivans as part of Google's testing program.

It is the first time Google has worked directly with a car manufacturer to integrate its self-driving technology into a passenger vehicle.

Google will add 100 Chrysler Pacifica Hybrid vehicles, designed and engineered by Fiat Chrysler, to its existing self-driving test program -- more than doubling the number of cars in the program -- and will integrate the sensors and computers the vehicles use to navigate roads without a driver.

Both companies will place a portion of their engineering teams in a facility in southeastern Michigan to speed up the development of self-driving cars.

Safer Roads

"The opportunity to work closely with FCA engineers will accelerate our efforts to develop a fully self-driving car that will make our roads safer and bring everyday destinations within reach for those who cannot drive," said John Krafcik, CEO of Google's Self-Driving Car Project.

Self-driving technology has the potential to help prevent many of the roughly 33,000 auto-related deaths in the U.S. each year, 94 percent of which are due to human error, the companies said.

Google is testing self-driving cars in four U.S. cities: Mountain View, Calif.; Austin, Texas; Kirkland, Wash.; and Phoenix. Google's self-driving team will test the minivans on its private test track in California before deploying them on public roads, the company said.

Google won't sell the vehicles being tested with the autonomous technology. However, the team is studying how community members perceive and interact with the autonomous vehicles; based on that feedback, vehicle performance will be tuned to feel more natural to people both inside and outside the vehicles, Google's Self-Driving Car Project said in a statement provided to TechNewsWorld by spokesperson Lauren Barriere.

Google Steps Ahead

The announcement means Google has taken a huge leap ahead of the competition in the development of self-driving cars, according to Colin Bird, senior analyst at IHS.

"Google is on the vanguard of deploying self-driving, driverless car software," he told TechNewsWorld. "The main issue they were facing was who was going to license it for the vehicles, as Google has shown no indication of wanting to make a vehicle themselves."

The collaboration indicates that Fiat Chrysler would be interested in deploying Google's L5 technology -- driverless and requiring no human intervention -- when the system is commercialized, Bird suggested.

Until now, Google has been using modified Lexus and Toyota SUVs and hybrids, as well as 100 pod cars developed by its own engineers.

The Chrysler Pacifica minivan could become part of an autonomous on-demand network of vehicles through a Car-as-a-Service model, Bird said. The minivan is "space optimized, features plenty of seats, and previous FCA minivan models have been modified to be wheelchair accessible."

Other Chrysler minivan models have been integral components of car-for-hire fleets, he noted.

Industry-Wide Race

The Google announcement marks the latest advance in the rush to develop autonomous vehicles.

Late last month, Google announced an alliance with Ford, Uber, Lyft and Volvo called the Self-Driving Coalition for Safer Streets, designed to promote the safety of autonomous vehicles. David Strickland, formerly of the U.S. National Highway Traffic Safety Administration, was named the national spokesperson for the coalition.

Apple last month reportedly hired Chris Porritt, former vice president of vehicle engineering at Tesla, to head up its top-secret Project Titan car program in Germany.

Source: http://www.technewsworld.com/story/83482.html

The prospective scale of the Internet of Things (IoT) has the potential to fill anyone looking from the outside with the technical equivalent of agoraphobia. However, from the inside, the view is very different. Looked at in detail, it is a series of intricate threads being aligned by a complex array of organizations.

As with any new technological epoch, questions around shape, ownership and regulation are starting to rise. Imagine trying to build the Internet again. It’s like that, but at a bigger scale.

The first hurdle is that of technological standards. We are at a pivotal moment in the development of the IoT. As the diversity of connected things grows, so does the risk of each "thing" being unable to talk to the others.

This begins with networking standards. From ZigBee to Z-Wave, EnOcean, Bluetooth LE or SigFox and LoRa, there are simply too many competing and incompatible networking standards from which to choose. Luckily enough, things seem to be converging and consolidating.

Moreover, the already well-established alliances are regrouping. First in the indoors world, where ZigBee 3.0 is getting closer to Google’s Thread — albeit still challenged by the Bluetooth consortium, who are about to release the Bluetooth mesh standard. More interestingly, the Wi-Fi Alliance is working on IEEE 802.11ah known as HaLow. All three standards specifically target lower power requirements and better range tailored for the IoT.

Similarly, in the outdoors world, the Next Generation Mobile Networks (NGMN) Alliance (working closely with the well-established GSMA, which rules the world of mobile standards) is working on an important piece of the puzzle for the world of smart things: 5G. With increased data rates, lower latency and better coverage, 5G is vital for handling the multitude of individual connections, and it will be a serious global competitor to the existing LPWANs (Low-Power Wide-Area Networks), such as SigFox and LoRa.

Security is one of the biggest barriers preventing mainstream consumer IoT adoption.

Whilst trials are currently taking place, commercial deployment is not expected until 2020. Before this can happen, spectrum auctions must be completed: typically a government-refereed scrap between technology and telecoms companies, with battle lines drawn on price. It's important to put an early stake in the ground with regulators to ensure sufficient spectrum is available at a cost that encourages the IoT to flourish, instead of leaving it at the mercy of inflated wholesale prices.

But the challenge doesn’t stop at the network level; the data or application level is also a big part of the game. The divergence in application protocols is only being compounded as tech giants begin to make a bid to capture the space. Apple HomeKit, Google Weave and a number of other initiatives are attempting to promote their own ecosystems, each with their own commercial agendas.

Left to evolve in an unmanaged way, we’ll end up with separate disparate approaches that will inexcusably restrict the ability of the IoT to operate as an open ecosystem. This is a movie we’ve seen before.

The web has already been through this messy process, eventually standardizing itself by Darwinian principles of technology and practices of use. The web provided a simple and scalable application layer for the Internet: a set of standards that any node of the Internet could use, whatever physical technology it used to connect.

The web is what made the Internet useful and ultimately successful. This is why a Web of Things (WoT) approach is essential. Such an approach has substantial support already. A Web Thing Model has recently been submitted to the W3C, based on research done by a mixture of tech giants, startups and academic institutes. These are early, tentative steps toward an open and singular vision for the IoT.
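
To make the Web of Things idea concrete: each device is described as a web resource with a machine-readable model of its properties and actions. The sketch below builds one such description as plain JSON. The field names, URLs, and identifier are illustrative assumptions only; they do not follow the exact schema of the W3C Web Thing Model submission:

```python
import json

# Hypothetical, simplified "thing description" in the spirit of the
# Web of Things: the device advertises its properties and actions as
# plain web resources that any client can discover and call over HTTP.
thing = {
    "name": "Kitchen Lamp",
    "id": "urn:example:lamp-42",  # made-up identifier
    "properties": {
        "on": {
            "type": "boolean",
            "href": "/things/lamp-42/properties/on",
        },
        "brightness": {
            "type": "integer", "minimum": 0, "maximum": 100,
            "href": "/things/lamp-42/properties/brightness",
        },
    },
    "actions": {
        "fade": {"input": {"level": "integer", "duration_ms": "integer"}},
    },
}

# Serialize the model; any node on the web can parse this, regardless
# of the radio technology the lamp itself uses to connect.
print(json.dumps(thing, indent=2))
```

The point of the exercise is the one the article makes: the description is ordinary web data, so interoperability stops depending on which networking standard (ZigBee, Thread, Bluetooth mesh, and so on) sits underneath.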

The resolution of this issue opens up the possibility of a vast collaborative network, where uniform data can optimize a wild array of existing processes. However, as data gradually becomes the most valuable asset of a slew of once inanimate objects, what does this mean for legacy companies who build the products which have had no previous data strategy?

The tech sector is comfortable with sharing and using such information, but for companies that have their grounding in making everything from light bulbs to cars, this is a new concept. Such organizations have traditionally had a much more closed operational approach, treating data like intellectual property — something to be locked away.

To change this requires a cultural shift inside any business. Whilst this is not insurmountable by any means, it brings to the fore the need to effect a change in mind-set inside the boardroom. For such a sea change to happen, it will require education, human resources and technology investment.

The security of a smart object is only as strong as its weakest connected link.

Security is one of the biggest barriers preventing mainstream consumer IoT adoption. A Fortinet survey found that 68 percent of global homeowners are concerned about a data breach from a connected device. And they should be: Take a quick look at Shodan, an IoT search engine that gives you instantaneous access to thousands of unsecured IoT devices, baby monitors included! In 2015, the U.S. Federal Trade Commission stated that “perceived risks to privacy and security…undermine the consumer confidence necessary for technologies to meet their full potential.”

For manufacturers to boost consumer confidence, they must be able to demonstrate that their products are secure, something that seems to have come under increasing pressure lately. The problem with security is that it is simply never achieved. Security is a constant battle against the clock, deploying patches and improvements as they come.

This can clearly be overwhelming for product manufacturers. Relying on an established IoT platform that has implemented comprehensive, robust security methodologies and that can guide them through such a complex area is therefore a wise move.

Consumers also share some responsibility for the security of their data: using strong passwords for product user accounts and on Internet-facing devices, like routers or smart devices; using encryption (such as WPA2) when setting up Wi-Fi networks; and installing software updates promptly.
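
That consumer-side checklist can be imagined as a simple configuration audit. The sketch below is purely illustrative; the config keys and thresholds are assumptions for the example, not any real device's settings format:

```python
# Illustrative audit of the three consumer-side checks above:
# strong password, WPA2 encryption, and prompt software updates.
# The config dict shape is invented for this example.

def audit_device(config):
    """Return a list of human-readable issues found in a device config."""
    issues = []
    if len(config.get("password", "")) < 12:
        issues.append("password shorter than 12 characters")
    if config.get("wifi_security") != "WPA2":
        issues.append("Wi-Fi not using WPA2 encryption")
    if not config.get("auto_update", False):
        issues.append("automatic software updates disabled")
    return issues

# A weak setup trips all three checks; a hardened one trips none.
print(audit_device({"password": "hunter2", "wifi_security": "WEP"}))
print(audit_device({"password": "a-much-longer-passphrase",
                    "wifi_security": "WPA2", "auto_update": True}))
```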

However, as consumer adoption of IoT rises, it is critical for manufacturers to ensure that the security of smart, connected products is at the heart of their IoT strategy. After all, the security of a smart object is only as strong as its weakest connected link.

Coupled with security, emergent issues around data privacy, sharing and usage will become something everyone will have to tackle, not just tech companies. In the data-driven world of IoT, the data that gets shared is more personal and intimate than in the current digital economy.

For example, consumers have the ability to trade, through their bathroom scales, protected data such as health and medical information, perhaps for a better health insurance premium. But what happens if a consumer is supposed to lose weight, and ends up gaining it instead? What control can consumers exert over access to their data, and what are the consequences?

Consumers should be empowered with granular data-sharing controls (not all-or-nothing sharing), and should be able to monetize the data they own and generate. Consumers should also have a “contract” with a product manufacturer that adjusts over time — whether actively or automatically — and that spells out the implications of either a rift in data sharing, or in situations where the data itself is unfavorable.
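
One way to picture granular sharing controls is as a per-field consent policy rather than a single all-or-nothing opt-in. The sketch below is hypothetical; the class, field names, and parties are invented for illustration and are not taken from any real product:

```python
# Sketch of per-field data-sharing consent: the consumer grants or
# revokes access to individual fields, not the whole record.
# All names here are hypothetical.

class ConsentPolicy:
    def __init__(self):
        self.grants = {}  # field name -> set of parties allowed to read it

    def grant(self, field, party):
        self.grants.setdefault(field, set()).add(party)

    def revoke(self, field, party):
        self.grants.get(field, set()).discard(party)

    def redact(self, record, party):
        """Return only the fields this party has been granted."""
        return {k: v for k, v in record.items()
                if party in self.grants.get(k, set())}

policy = ConsentPolicy()
policy.grant("weight_kg", "insurer")

reading = {"weight_kg": 82.5, "heart_rate": 61}
print(policy.redact(reading, "insurer"))  # only weight is shared
policy.revoke("weight_kg", "insurer")
print(policy.redact(reading, "insurer"))  # nothing is shared now
```

The "contract" the article describes would then amount to versioning this policy over time, so both sides can see exactly what is shared and what happens when a grant is withdrawn.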

The onus here also lies on regulators to ensure that legal frameworks are in place to build trust into the heart of the IoT from the very beginning. The industry needs to embrace this and embark on an open and honest dialogue with users from the start. Informed consent will never be more important, as data and metadata from connected devices are able to build a hyper-personalized picture of individuals.

Brands would be wise to understand that the coming influx of consumer data is a potential revenue stream that must be protected and nurtured. As such, the perception of privacy and respect is paramount for long-term engagement with customers. So much so that product manufacturers will likely start changing their business models to create data-sharing incentives, and perhaps even give their products away for free.

Due to its massive potential, the Internet of Things is advancing apace, driven largely by technology companies and academic institutions. However, only through wide-scale education and collaboration outside of this group, will it truly hit full stride and make our processes, resources utilization and, ultimately, our lives, better.

Source: http://techcrunch.com/2016/02/25/the-politics-of-the-internet-of-things/

Researchers at nine UK universities will work together over the next three years on a £23m ($33.5m) project to explore the privacy, ethics, and security of the Internet of Things.

The project is part of 'IoTUK', a three-year, £40m government programme to boost the adoption of IoT technologies and services by business and the public sector. The Petras group of universities is led by UCL with Imperial College London, University of Oxford, University of Warwick, Lancaster University, University of Southampton, University of Surrey, University of Edinburgh, and Cardiff University, plus 47 partners from industry and the public sector.

Professor Philip Nelson, chief executive of the UK's Engineering and Physical Sciences Research Council, said that in the not-too-distant future almost all of our daily lives will be connected to the digital world, with physical objects and devices able to interact with each other, ourselves, and the wider virtual world.

"But, before this can happen, there must be trust and confidence in how the Internet of Things works, its security, and its resilience," he said.

The research will focus on five themes: privacy and trust; safety and security; harnessing economic value; standards, governance, and policy; and adoption and acceptability. Each will be examined from a technical point of view and for its impact on society.

Initial projects include large-scale experiments at the Queen Elizabeth Olympic Park; the cybersecurity of low-power body sensors and implants; understanding how individuals and companies can increase IoT security through better day-to-day practices; and ensuring that connected smart meters are not a threat to home security.

It's still early days for the IoT, but concerns have already surfaced about the security and privacy of the technology, and about how the data generated by, for example, fitness monitors or other home systems can be used by the companies that collect it.

Source: http://www.zdnet.com/article/researchers-investigate-the-ethics-of-the-internet-of-things/


AOFIRS

World's leading professional association of Internet Research Specialists - We deliver Knowledge, Education, Training, and Certification in the field of Professional Online Research. The AOFIRS is considered a major contributor in improving Web Search Skills and recognizes Online Research work as a full-time occupation for those that use the Internet as their primary source of information.
