This rise in brand value meant Google overtook Apple – which saw its brand value fall 8% to $228bn – while Microsoft remained in the third slot.

Other additions to the top 10 were Facebook in fifth position (up 44% to $103bn) and Amazon in seventh (up 59% to $100bn).

Overall the telecoms sector was the strongest performer in the 2016 UK ranking, with Vodafone and BT taking first and third places respectively.

Vodafone held onto its top ranking despite brand value dropping 4% to $37bn, while BT’s rose 3% to $19bn. HSBC remained in second place with a value of $20bn, a 16% fall on the previous year.

The BrandZ ranking shows it has been another challenging year for the UK, as the total value of its Top 10 brands dropped to $137bn, down 8% from 2015. This compares with an increase of 5% in mainland Europe.

North America’s 10% growth was significantly less than its 19.1% increase last year, suggesting that the influence of the economic slowdown in China, along with global financial issues and uncertainty, cannot be underestimated.

Peter Walshe, global BrandZ director at Millward Brown, said: “It’s clear from the BrandZ rankings that innovation, whether that is delivering something new or disrupting an existing market, plays a critical role in a brand’s success, both in the UK and around the world.

“More than that, it is also about consumer perception so it is essential that they shout about their achievements and then deliver on their promises. This is ably demonstrated by the notable success stories of brands such as Vodafone, BT, Dove and Lipton, which are thriving despite the impact that the current global outlook is having on the UK.”

Source:  https://www.research-live.com/article/news/google-overtakes-apple-as-most-valuable-brand/id/5007942

People are spending less time on social media apps, in some cases substantially less, a new study from marketing intelligence firm SimilarWeb found.

The company compared Android users' daily time spent on Facebook, Instagram, Twitter and Snapchat from January to March 2016 with the same period in 2015. The firm looked at data from the U.S., UK, Germany, Spain, Australia, India, South Africa, Brazil and France.

Facebook's Instagram saw the biggest year-over-year drop — usage was down 23.7 percent this year, closely followed by Twitter (down 23.4 percent), Snapchat (down 15.7 percent) and Facebook (down 8 percent), the study found.

Twitter's stock is trading down around 34 percent, and Facebook's stock is up almost 14 percent so far this year.

In the U.S. — typically social media's most lucrative market — Instagram use was down 36.2 percent, Twitter was down 27.9 percent, Snapchat was down 19.2 percent and Facebook fell 6.7 percent. Despite this drop, Facebook users in the U.S. continued to spend the most time using the app: 45 minutes and 29 seconds every day on average. Facebook users in India used the app the least, spending 22 minutes and 59 seconds daily, on average.

Americans are also the biggest Snapchatters, spending 18 minutes and 43 seconds using the app daily, followed by the French (16 minutes and 7 seconds), and then the British (15 minutes and 27 seconds).

Across all four apps, users spent the least time using Twitter. Spanish users spent the most time using the app (13 minutes and 31 seconds daily), closely followed by Americans (13 minutes and 30 seconds) and the French (13 minutes and 7 seconds). This was despite overall declines in usage across these geographies.

Current installs — the number of apps installed on devices — for the big four social media apps among Android users in the countries studied were down nine percent year over year. Meanwhile, Facebook's messaging apps — WhatsApp and Messenger — increased installs, up 15 percent and two percent respectively. Both Snapchat and Instagram saw a rise in installs in certain countries. Snapchat installs increased in Germany, Spain, India and Brazil, where the increase was most pronounced at 22 percent year over year. Instagram installs rose in France, Germany and the U.S.

Source:  http://www.cnbc.com/2016/06/06/people-are-spending-much-less-time-on-social-media-apps-said-report.html

It takes less than a minute to opt-out of Facebook's new ads system.

Facebook member or not, the social networking giant will soon follow you across the web -- thanks to its new advertising strategy.

From today, the billion-plus-member social network will serve its ads to account holders and non-users alike -- following in the footsteps of advertising giants like Google, which has historically dominated the space.

In case you didn't know, Facebook stores a lot of data on you. Not just what you say or who you talk to (no wonder it's a tempting trove of data for government surveillance) but also what you like and don't like. And that's a lot of things, from goods to services, news sites and political views -- not just from things you look at and selectively "like" but also sites you visit and places you go. You can see all of these "ad preferences" by clicking this link.

Facebook now has the power to harness that information to target ads at you both on and off its site.

In fairness, it's not the end of the world -- nor is it unique to Facebook. A lot of ads firms do this. Ads keep the web free, and Facebook said that its aim is to show "relevant, high quality ads to people who visit their websites and apps."

Though the company hasn't overridden any settings, many users will have this setting on by default, meaning you'll see ads that Facebook thinks you might find more relevant based on what it knows about you.

The good news is that you can turn it off, and it takes a matter of seconds.


Head to this link (and sign in if you have to), then make sure the "Ads on apps and websites off of the Facebook Companies" option is set to "No."

And that's it. The caveat is that you may see ads relating to your age, gender, or location, Facebook says.

You can also make other ad-based adjustments to the page -- to Facebook's credit, they're fairly easy to understand. The best bet (at the time of publication) is to switch all options to "no" or "no-one."


Given that this also affects those who aren't on Facebook, there are different ways to opt-out.

iPhones and iPads can limit ad-tracking through a built-in setting -- located in the Settings app.

Android phones also have a similar setting -- you can find out how to do it here.

As for desktops, notebooks, and some tablets, your best option might be an ad-blocker.

But if you want to be thorough, you can opt-out en masse from the Digital Advertising Alliance. The website looks archaic, and yes, you have to enable cookies first (which seems to defeat the point but it does make sense, given these options are cookie-based) but it takes just a couple of minutes to opt-out.

Source:  http://www.zdnet.com/article/to-stop-facebook-tracking-you-across-the-web-change-these-settings/

Two months ago, the internet blew up over Google AMP (Accelerated Mobile Pages). And, just when you thought you passed the Structured Data Testing tool and implemented your AMP HTML file extensions perfectly, you realize your AMP pages are not showing up in the search results.

Are you asking: Where are my AMP pages in the search results? Does AMP happen in real-time? Or, does it take a few hours for Google to index your AMP pages?

Well, take comfort in knowing you are not alone. Yep, the SEJ team is right there with you.

In true early adopter fashion, SEJ launched our AMP pages on March 29. Since the launch, we have seen traffic to our AMP pages spike from 50 to 700 sessions. High-fives all around!

SEJ Google AMP Traffic

As you can see below in our breakdown of AMP traffic, the majority of our traffic is coming from play.google.com/newsstand.

Google AMP Referrers for SEJ

However, when we look in the search results, our AMP pages are nowhere to be found. Sound familiar? Others are having the same issue.

So, it’s no wonder that only 25% of SEO professionals have taken steps to implement AMP. And, while there may be more legwork involved in creating AMP pages, Google has created a new problem: With countless hours spent, it’s hard to know when and where your AMP pages exist in the SERPs.

To help you cut through the noise, we’re going to share our experience here at SEJ. But, first, let’s give you some insight on how we assembled our AMP pages at SEJ.

Implementing AMP

For implementing AMP, Vahan Petrosyan, our lead developer here at SEJ, used the following plugins:


Glue for Yoast SEO & AMP

But, you will likely need a little more than a plugin if you want to implement AMP. Here is what Vahan had to say about using plugins alone:

“Simple installation of the plugins gives only standard and limited functionality for customizing posts. I did custom development using those plugins’ action hooks and modified the standard look of our AMP pages. We modified styles, added a slide menu from the right using pure CSS3 techniques and sticky share buttons, and modified the default Google Analytics tracking code to include custom dimensions and event tracking, such as share button and menu item clicks.”

Here are screenshots of the sleek reading experience on mobile:

SEJ Google AMP Page

SEJ Google AMP Page Example

We’re excited about the new look and interested to see how it affects traffic.

Where are My Google AMP Pages?

Restructuring your mobile website to cater to Google’s new features is just part of the job description when you launch a website, right? But as we’ve come to learn, the theory “build it and they will come” isn’t so true. Based on our experience at SEJ, Google AMP pages are not indexed in real time. Since we launched our AMP pages on March 29, 2016, we have not seen them in the search results, even though we’re getting traffic to our site from AMP pages. We noticed an uptick in traffic nine days after we launched our AMP pages. So, essentially, it took Google nine days to index our AMP pages.

The SEJ team reached out to Gary Illyes to confirm this. Gary stated in a recent interview with SEJ,

“In general the same applies on AMP results as on normal web results: we have to crawl and index the page in order to show a result for it. Depending on the site this can take from a few minutes to days, but most of the time it’s pretty fast.”

One theory: While Google’s AMP pages are being praised as the fairy godmother to all the small businesses in Internet land, you may not see an immediate boost if you’re not first to implement. I’m noticing that many of the bigger publishers (mainly brands that partnered with Google in its AMP beta testing) seemingly get first dibs in the Google search results carousel.

Now, with Google News featuring up to 14 pieces of AMP content from publishers that have already launched their AMP pages, it’s going to continue to be a race to see who gets there first. This could potentially hurt smaller businesses.

Another theory: Google will only display news articles. These will only show up in the AMP carousel if Google views the content as recent news for the topic you’re searching for. The AMP carousel is a Top Stories carousel, so perhaps your articles aren’t newsworthy enough for it. Google will only display the Article, NewsArticle, and BlogPosting schema types in the carousel at this time.
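Since carousel eligibility hinges on those schema types, it can help to see what such markup looks like in practice. Below is a minimal sketch of a NewsArticle JSON-LD block of the kind an AMP page would embed; the headline, author, dates and URLs are invented placeholders, and Python is used here simply to assemble and print the JSON-LD.

```python
import json

# Minimal NewsArticle structured-data sketch; all values below are
# made-up placeholders, not a documented Google requirement.
article_schema = {
    "@context": "http://schema.org",
    "@type": "NewsArticle",  # Article and BlogPosting also qualify
    "headline": "Example AMP Story",
    "datePublished": "2016-03-29T08:00:00Z",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "image": {
        "@type": "ImageObject",
        "url": "https://example.com/lead.jpg",
        "height": 800,
        "width": 800,
    },
}

# The serialized output would be embedded in the AMP page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(article_schema, indent=2))
```

Run your finished markup through the Structured Data Testing Tool to confirm the type is recognized before expecting carousel placement.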

If you’ve double-checked your schema, head over to the Webmaster Central Forums and/or the AMP Error Reports in Search Console. You can also look at the new AMP filter in Search Console.

When the SEJ team asked Gary Illyes why we’re not seeing AMP pages expand faster outside of news sites, he responded,

“The format itself is open to anyone who’d like to speed up their sites, it’s not limited to only one kind of site. Currently, we’re testing AMP results with a limited set of publishers, but we’re exploring ways to show AMP results from more sites.”

Also, Google does not guarantee your articles will show in the AMP carousel. Whether your content is displayed depends on the searcher’s query and Google’s algorithms, even if you’ve passed the structured data test and validated your AMP pages.

We also asked Gary Illyes why AMP users are able to get traffic to their sites via AMP pages but still not see those pages in the SERPs. Gary stated,

“AMP pages, just like normal webpages, can be accessed in many ways: from bookmarks, from search results, directly from the browser, and so on. If I know that a site is AMP enabled, I will very likely stick the “/amp” to the end of the URLs on the site, and so the publisher will likely see a direct visit in their tracking software. This is not necessarily what’s happening to SEJ, but that’s my best guess.”

Source:  https://www.searchenginejournal.com/long-amp-pages-take-show-search-results/163006/

Columnist Thomas Stern discusses how content marketing performance can fall short if SERP features aren't considered.

Content marketing has become commonplace for marketers today. For those who need guidance, there are numerous handbooks and step-by-step resources that teach the best ways to understand who we’re marketing to and what their needs are.

Most content marketing guides can explain the principles of optimization, as well as the critical role that search engines play in generating content visibility. What the guides fail to address are the many features or types of search results that exist and how they impact the visibility of content that is produced.

These enhancements — special methods of display tailored to match the type of content being shown — encompass a wide range of SERP features, including:

Knowledge Graph;
local pack;
site links; and
Featured Snippets.

In this article, I’ll focus on Google and use Google-specific terminology, though Bing has unique methods of displaying information as well. Let’s explore the most effective ways Google enhances its search results and look at examples of keywords and content types that both generate and populate them.

Google Knowledge Graph

Google Knowledge Graph, one of the most prominent SERP features, appeared in more than one-third of searches in the last 30 days, according to Moz.

Knowledge Graph was originally introduced in 2012 to enhance Google’s search results by tailoring content display to user intent. Today, Knowledge Graph provides a detailed spectrum of information on nearly any topic. The tool serves to reduce the need for consecutive searches and resolve any queries, without the user having to navigate to other websites lower on the SERP.

Google’s recent removal of right-hand-side ads on desktop search resulted in content marketers recognizing Google Knowledge Graph as a tool to improve visibility, in addition to their typical SEO and SEM initiatives for targeted queries. For instance, a search for “coffee creamer” displays images of a popular non-dairy creamer in addition to the brand’s shopping ads and search ads.

coffee creamer

Local pack, reviews and sitelinks:

A more general search term like “coffee” removes context and pushes Google to reveal a wider spread of SERP features, including:

Knowledge Graph of nutritional information sourced from the USDA;
a local pack of nearby coffee shops with reviews; and
a Wikipedia page with additional sitelinks.


In 2013, Google began analyzing search intent for each query when it rolled out its Hummingbird algorithm. Without additional keywords to help the search engine infer intent, the inclusion of the local pack provides context to the “coffee” query in case the user isn’t seeking health information or historical details about coffee.

The local pack features the top three nearby coffee shops. Though the two visible brands are both franchises closest to the ZOG Digital office, distance is not the only factor that potential customers are likely to consider — The Coffee Bean & Tea Leaf has 14 reviews with an aggregate rating of 4.2 stars, while both Starbucks locations combined have one review and no visible ratings.

Users who intend to find value in their searches rely on reviews to inform their actions.

If Google is able to infer a user’s intent, it will tailor its display of results with a spread of features designed to meet that user’s needs. A search for “best coffee” has a lot of value for brands aiming to reach consumers in the purchase consideration stages — brands not only need to invest in paid media like shopping ads and search ads, they also need to earn strong reviews. Conversely, a search term like “best coffee shop” prioritizes local quality and indicates the user’s desire to buy a prepared coffee drink rather than beans to brew at home. In this situation, reviews in the local pack have more value than distance, and the results shown include only the highest-rated nearby coffee shops.

For users seeking more information on coffee, the Wikipedia sitelinks that are visible in the above SERP direct visitors toward their specific need, while allowing them to skip a navigational step. As seen below, sitelinks are typically expanded to include a description for more brand-specific search terms like “Starbucks menu.”

brewing coffee at home

Visible alongside the menu sitelinks is a carousel of menu categories and a link titled “More about Starbucks” that takes users to local Starbucks options. The categories expand to display a variety of menu options that users can access without leaving the SERP.

For restaurant brands, menu PDFs are difficult for search engines to crawl. Google prefers when pages have Schema coding, text cues that give search engines instructions for reading a page. Schema code specific to menu results must also be included on menu pages to populate a carousel, as shown above in the SERP.
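To make the point concrete, here is a hedged sketch of what menu-specific structured data can look like. schema.org defines Menu, MenuSection and MenuItem types; whether these exact types powered the carousel described above is not stated in the article, so treat the structure, item names and prices below as illustrative assumptions, with Python used only to build and print the JSON-LD.

```python
import json

# Illustrative menu structured-data sketch using schema.org's
# Menu/MenuSection/MenuItem types; names and prices are invented.
menu_schema = {
    "@context": "http://schema.org",
    "@type": "Menu",
    "hasMenuSection": [{
        "@type": "MenuSection",
        "name": "Espresso Drinks",
        "hasMenuItem": [{
            "@type": "MenuItem",
            "name": "Caffe Latte",
            "offers": {
                "@type": "Offer",
                "price": "3.65",
                "priceCurrency": "USD",
            },
        }],
    }],
}

# As with article markup, the output would be embedded on the menu
# page inside a <script type="application/ld+json"> tag -- which, unlike
# a menu PDF, search engine crawlers can read directly.
print(json.dumps(menu_schema, indent=2))
```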

As proven in Mediative’s latest SERP study, the top left-hand side of the SERP still has prominent visibility for the user despite mobile devices training a user’s search habits to scan results pages vertically, rather than horizontally. With increased competition for high-ranking search visibility, content marketers can optimize for results like these to earn visibility in other ways besides being the top organic listing.

Featured Snippets

Featured Snippets, as shown above, ensure that Starbucks has prominent content visibility among its competitors for search terms such as “brewing coffee at home.”

To accomplish this, Starbucks provides a blog post structured for readability and usability. It has also structured its site so there’s prominent navigation for its “How To Brew” category of pages. The comprehensive 901-word article takes users step by step through four ways to brew coffee on a few different types of coffee makers. The segment that’s pulled for the SERP is for a common one — how to brew coffee with a coffee press.

Final thoughts

Google has enhanced its search engine to be more visual, informational and user-friendly. As a result, content marketers are presented with a unique challenge that requires new tactics.

With increased competition in the search space, content marketers cannot rely on classic SEO best practices alone to improve page visibility. Tools like SEMrush’s recently released Keyword Difficulty tool can help identify which keywords have the potential to display with these enhanced SERP features, but optimizing for the enhanced results page in addition to improving search rankings requires more than identification. Optimizing beyond the typical blue-link results requires additional research and a comprehensive content marketing plan.

Source:  http://searchengineland.com/refine-content-marketing-tactics-enhanced-search-engine-display-249880

Contributor Christi Olson, search evangelist at Bing, takes a deeper look at data feeds and their role in search (past, present and future).

If you haven’t done it already, now would be a great time to take a fresh look — or even a first look — at how data feeds can reinvigorate your online marketing strategy and prepare your business for an undeniable shift in the search space. Structured data feeds are the silent drivers behind a new search experience centered around more conversational, localized and personalized search engagements.

Engines are moving away from the “traditional” customer journey, where users search for a keyword, sort through text PPC search results and click onto a new website. Instead, structured data feeds front-load the search engine results pages (SERPs) with user-rich information, creating a new search experience for more personalized, localized and actionable results.

Data feed = information bus

Feeds, simply put, are mechanisms of structured data that enable either a platform or a person to take action. Some of the original uses of feeds were as mechanisms to automate the pulling of information to a centralized location. They helped streamline access to data and information.

We’ve been using structured data and data feeds without even realizing it. Think RSS and news feeds, weather and traffic reports or TIBCO (The Information Bus Company) of the early internet, which helped digitalize the stock market. Even your weekend “honey-do” list is a type of structured data.

Today, data feeds in search focus primarily on shopping, enabling merchants to structure their product catalogs into standardized file formats. Search engines are then able to access them, correctly understand their context and display them in visually appealing ad formats for consumers. Bing Product Ads, Google Product Listing Ads and Amazon Marketplace have all streamlined this process to make it easy for merchants to market their products online through data feeds. But this is just the beginning.
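The "standardized file format" in question is typically a simple delimited file, one row per product. As a hedged sketch, the snippet below flattens a tiny product catalog into a tab-separated feed; the exact column names each platform requires vary (check the Bing and Google feed specifications), so the headers and product values here are illustrative assumptions.

```python
import csv
import io

# A one-product "catalog"; field names are illustrative, not the
# official attribute list of any specific shopping platform.
products = [
    {
        "id": "SKU-001",
        "title": "Patagonia Arctic Thermal Winter Jacket, Men's Size L, Blue",
        "price": "199.00 USD",
        "link": "https://example.com/p/sku-001",
        "image_link": "https://example.com/img/sku-001.jpg",
        "availability": "in stock",
    },
]

# Flatten the catalog into a tab-separated feed with a header row.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(products[0]), delimiter="\t")
writer.writeheader()
writer.writerows(products)
feed_text = buf.getvalue()
print(feed_text)
```

A script like this would run on a schedule so the uploaded feed stays in sync with the live catalog.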

The better we can communicate and share information through data feeds and structured data, the better receiving platforms, such as search engines, can perform for us.

Advances in data feeds are being made each day to further performance, and also to automate previously manual/daunting tasks. Automation in search is everywhere:

In third-party tools to help search experts optimize their campaigns
In Bing’s merchant feeds to manage entire store catalogs
In Google’s business data feeds to automatically update ad copy

Expect new ad formats — Shopping Ads are just the beginning

As data feeds become more structured and standardized, search engines can better understand, compare and contextualize data, which is resulting in exciting new ad formats within the SERPs.

Movie times, tourist activities and breaking news are all displayed directly in the search engine results. Today’s searchers can comparison shop on Bing with Shopping Ads or book travel directly through Google Hotel Product Ads or through TripAdvisor.

The future of search will include new ad formats that move outside of the e-commerce realm to help simplify the process of consumers taking actions. Think of new ad formats that can streamline processes and tasks like making restaurant reservations, renting a car, getting insurance quotes or requesting information about B2B pricing and services.

Structured data feeds are helping to power this shift, as search becomes about more than just finding information. It’s about gaining knowledge, taking action and relying on the search engines as intercommunicative partners.

Get ready

The sooner you get on the data feed information bus, the better! Businesses across a variety of industries will benefit as search engines better understand, compare and display information for searchers.

At the forefront are retailers, trip planners, hotels and airlines, but it is easy to see how the benefits of structured data feeds will rapidly impact other areas like insurance, real estate, education and beyond. Way beyond.

Optimizing your Bing Shopping data feeds

Bing Shopping Campaigns make it easier than ever to connect with your customers and promote your products online through visually appealing Product Ads. In order to make the most of your Bing Product Ads, we recommend the following:

Because you do not bid on keywords within a shopping campaign, use the five optional custom labels (0–4) to segment your products into smaller groups so that you can bid strategically. Include a “top performers” group, which will enable you to direct a higher, more aggressive budget toward your top-performing products. Other label attributes might include: sale items, seasonal, price ranges, profit margins and stock levels.

Be sure to utilize negative keywords against all campaigns to minimize irrelevant searches.
Use images strategically by showing products in use, using multiple colors and providing high-resolution photos.
Be very aware of pricing and stay competitive, or add sale pricing to draw attention to your ads.
Use keyword-rich titles which include brand, product type, gender, size and a generic color. A good example: Patagonia Arctic Thermal Winter Jacket, Men’s Size L, Blue.

Refresh your feeds as often as possible, at least every 24 hours.

GTINs — Global Trade Item Numbers — should be added to any products that may be sold by multiple retailers. Google has set a firm May 16, 2016, deadline to include GTINs. Bing has yet to announce a cutoff date, though they promote it as a best practice.
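A GTIN's last digit is a check digit computed with GS1's standard mod-10 rule, so feeds can be validated before upload. The sketch below implements that rule; the function names are my own, but the algorithm (weights of 3 and 1 alternating from the rightmost data digit) is the published GS1 calculation.

```python
def gtin_check_digit(body: str) -> int:
    """Compute the GS1 check digit for the digits preceding it.

    Weights alternate 3, 1 starting with 3 on the rightmost
    body digit, per the standard GS1 mod-10 rule.
    """
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10


def is_valid_gtin(gtin: str) -> bool:
    """Validate a GTIN-8/12/13/14: all digits, valid length,
    and the final digit matches the computed check digit."""
    return (gtin.isdigit()
            and len(gtin) in (8, 12, 13, 14)
            and gtin_check_digit(gtin[:-1]) == int(gtin[-1]))
```

Running every feed row through a check like this catches mistyped GTINs before a platform rejects (or silently drops) the product.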

Source:  http://searchengineland.com/move-text-ads-data-feeds-driving-new-search-experiences-249716

When you're stuck in a traffic jam wreathed in fumes or squeezed onto a sweltering commuter train, the promised future of a smart, efficient transport system may seem like a utopian dream.

But optimistic technologists assure us relief from this gridlocked hell is closer than we think.

And it's all down to the "internet of moving things" - cars, buses, bikes, trains, and planes laden with sensors beaming data to a big brain in the cloud.

The better we know where everything is, the better we can manage traffic flows and optimise routes, avoiding congestion, accidents and natural hazards, the argument goes.

Faster deliveries

"The internet of moving things is giving us whole new sets of data," says Shiva Shivakumar, chief executive of Urban Engines, a specialist in urban mobility data.

"Delivery companies, taxis, travel cards, smartphones, and connected cars are all pushing movement data to the cloud which we can then mash up with real-world maps to create a space/time engine," he says.

"Transport providers from Singapore to Sao Paulo can now analyse journeys trip by trip and understand why a bus was late, spot where there is unused capacity or see opportunities for new routes."

Mr Shivakumar, a former Google engineer, says his firm has been able to help delivery companies in San Francisco optimise their routes in real time, testing different scenarios based on current traffic flows and weather conditions.

This type of analysis has led some companies to experiment with mobile delivery hubs, rather than having all goods stored in one warehouse and making all the journeys from there.

Taxi firms now know where the most demand is at each point during the day, even the areas where customers tip the most.

"Experience might tell you one thing, but the data might tell you something else," says Mr Shivakumar.

And in the not-too-distant future, automated travel advisers on our smartphones with access to real-time data from all forms of transport will tell us the best way to reach our destinations, he believes.


Saving lives

Mapping firm Here - recently acquired by German vehicle makers BMW, Audi and Daimler - is busy mapping the road networks of major cities around the world using laser technology, or lidar. It has a fleet of hi-tech camera cars much as Google does.

This kind of technology can perceive road markings, lane widths, and concrete barriers, says vice-president Aaron Dannenbring, to create a "precise, reference index of the road system globally".

"But we also need a dynamic map that reflects everything that's happening on the road. So by connecting other vehicles to our cloud platform we can capture how the traffic situation is changing."

And the more vehicles are fitted with sensors and cameras, the more accurate and useful these dynamic maps will be, he believes.

"Say a number of cars sense black ice on the road, that data will go to the cloud and be analysed by our algorithms. If a pattern emerges a warning will be beamed down to other cars to inform them.

"We think tens of thousands of lives could be saved each year as a result of these systems."

This internet of moving things will also be crucial to the success of driverless vehicles.


Managing the flow

Rail, too, is benefiting from this kind of movement data analysis.

For example, indoor location start-up Pointr is tracking how people move around railway stations to offer navigation tips and live train updates. It is taking part in the Hacktrain innovation programme.

Such data analysis could aid the design of stations and ticket offices, while the move to digital ticketing and the integration of rail data with other transport data is bringing closer that "magic carpet ride" ideal - "gently wafting through stations without any barriers or friction," says Mark Holt, chief technology officer of rail ticketing website Trainline.


Source:  http://www.bbc.com/news/business-36215293

Part of the KevinMD toolkit series.

“How can I find a doctor online?”

A seemingly simple question, but patients are often confronted with too much information on the Internet, with variable quality.

Finding a doctor is similar to completing a puzzle. Like puzzle pieces, there are many resources available, including word of mouth, hospitals, insurance companies, and physician rating sites.

Don’t rely on a single resource, but use them to complement each other. The information available online should lead you to a reputable physician.

Here are some Internet resources to help find a doctor online and research the right doctor for you.

Step 1: Find a doctor online

There are several ways to come up with a doctor’s name online.

Google web search. The simplest, most direct method. Use keywords like [city], [state] with “doctor,” “primary care,” “physician,” “cardiologist,” and the like.

For instance, if I were a new patient looking for a primary care doctor in Nashua, NH, I’d type something like “primary care doctor, nashua nh,” which would yield a few leads to local hospitals and medical systems to get you started.

Your local hospital. Hospital websites generally have a “find a doctor” page, where you can browse through a physician directory with contact information. Not sure which hospital to choose? Use Medicare’s Hospital Compare where you can compare hospitals based on patient survey results and objective quality measures.

Online physician directories. There are dozens to choose from, summarized nicely on this Medline Plus page. It is particularly strong in consolidating dozens of “find a doctor” pages from the specialty societies. So if you’re looking for a cardiologist, pulmonologist, gastroenterologist or surgeon, start there.

For primary care doctors, I would choose the American Medical Association’s DoctorFinder. Keep in mind that AMA members show up first when queried. It takes a few more clicks to view non-AMA physicians.

Your health insurer. Your health insurer website has online physician directories that, of course, accept your particular health plan. Some tier doctors based on quality measures and whether they practice cost-effective medicine.

Step 2: Research your doctor on the Internet

Now that you have a name, how do you know if your doctor is right for you? A patient-physician relationship today is more like a partnership. And like any partner, a doctor who’s great for one patient may not be the right fit for another. Here are some ways to determine whether your new doctor is a good match.

Determine board certification. Critical. Board certification has generally been shown to be associated with quality of care. The American Board of Medical Specialties created Certification Matters where you can input doctors’ names and determine whether they’re board certified in any specialty.

Find out any disciplinary action. You’d want to know whether your new doctor has been disciplined by a medical board, or involved with malpractice cases. While the availability of that information can vary from state to state, your state’s medical board is a good place to begin.

The Federation of State Medical Boards (FSMB) has also created DocInfo, where you can purchase a physician profile that includes disciplinary action. According to the FAQ:

Whenever disciplinary information exists, this report will provide the name of the state medical board or licensing agency that initiated the action, what type of disciplinary action was taken (such as a license revocation, probation, suspension, etc.,) the date of the action and the basis or reason(s) for the action. The FSMB Physician Profile does not include information on medical malpractice settlements or claims.

The FSMB Physician Profile costs $9.95.

Do you have Medicare? Go to Medicare’s Physician Compare to find out whether your doctor accepts it.

Determine prescribing patterns. Using prescribing data from Medicare Part D, ProPublica created an extremely handy searchable database: Prescriber Checkup. There’s a wealth of information here, including how a provider compares to others in the same specialty and state, average prescription price, and a prescriber’s top-ranked drugs alongside each drug’s rank among all prescribers in the same specialty and state.

Physician rating sites. In general, online physician ratings are fragmented across dozens of sites, with most physicians having only a handful of reviews, if any at all. I wouldn’t rely on rating sites by themselves, but they can be a useful complementary piece. Uniformly poor reviews for a doctor across several rating sites can be a red flag, for instance.

The Informed Patient Institute has a tool that reviews and ranks these sites based on whether they’re for-profit or not, and how useful they are to patients. Use this as your starting point when wading into the pool of online physician ratings.

In Massachusetts, the Massachusetts Health Quality Partners and Consumer Reports pooled over 50,000 patient surveys from primary care and pediatric offices and presented the data in a searchable database. If you’re looking for a primary care doctor or a pediatrician in Massachusetts, start here.

Google your doctor. This is a necessity. Oftentimes, doctors won’t have a large digital footprint, and what comes up are their profiles on physician rating sites. But sometimes you’ll find the website of their practice, where you can get a better sense of how the office runs. Other times, you’ll find their social media presence, such as a LinkedIn profile or a Twitter feed. A LinkedIn profile is useful, since it’s essentially an online CV of your doctor. A Twitter feed can reveal a little of your doctor’s personality, and is probably the closest you’ll get to hearing him “speak” before meeting him.

And on occasion, you’ll see your doctor’s mainstream media exposure: newspaper articles and television appearances. Or stories written about him, positive or otherwise.

What shows up on your doctor’s Google search can tip your decision one way or the other if you’re on the fence.

Step 3: Put together the puzzle

Once you have researched your doctor online, use that information in conjunction with other sources, such as word of mouth from your friends or calling the practice yourself with questions.

Although not always possible, try to meet the prospective physician before making your choice. As with any partnership, not all of them are going to work out. It’s critical to see whether your personalities, values, and philosophies of care match, and that requires a face-to-face visit.

But with the amount of information available online, you’ll be far better prepared before that first meeting with your new doctor.

Kevin Pho is co-author of Establishing, Managing, and Protecting Your Online Reputation: A Social Media Guide for Physicians and Medical Practices. He is founder and editor, KevinMD.com, also on Facebook, Twitter, Google+, and LinkedIn.

Written By: KEVIN PHO


Wednesday, 03 June 2015 07:33


As Wikipedia has become more and more popular with students, some professors have become increasingly concerned about the online, reader-produced encyclopedia.


While plenty of professors have complained about the lack of accuracy or completeness of entries, and some have discouraged or tried to bar students from using it, the history department at Middlebury College is trying to take a stronger, collective stand. It voted this month to bar students from citing the Web site as a source in papers or other academic work. All faculty members will be telling students about the policy and explaining why material on Wikipedia -- while convenient -- may not be trustworthy.


"As educators, we are in the business of reducing the dissemination of misinformation," said Don Wyatt, chair of the department. "Even though Wikipedia may have some value, particularly from the value of leading students to citable sources, it is not itself an appropriate source for citation," he said.


The department made what Wyatt termed a consensus decision on the issue after discussing problems professors were seeing as students cited incorrect information from Wikipedia in papers and on tests. In one instance, Wyatt said, a professor noticed several students offering the same incorrect information, from Wikipedia.


There was some discussion in the department of trying to ban students from using Wikipedia, but Wyatt said that didn't seem appropriate. Many Wikipedia entries have good bibliographies, Wyatt said. And any absolute ban would just be ignored. "There's the issue of freedom of access," he said. "And I'm not in the business of promulgating unenforceable edicts."


Wyatt said that the department did not specify punishments for citing Wikipedia, and that the primary purpose of the policy was to educate, not to be punitive. He said he doubted that a paper would be rejected for having a single Wikipedia footnote, but that students would be told that they shouldn't do so, and that multiple violations would result in reduced grades or even a failure. "The important point that we wish to communicate to all students taking courses and submitting work in our department in the future is that they cite Wikipedia at their peril," he said.


He stressed that the objection of the department to Wikipedia wasn't its online nature, but its unedited nature, and he said students need to be taught to go for quality information, not just convenience.


The frustrations of Middlebury faculty members are by no means unique. Last year, Alan Liu, a professor of English at the University of California at Santa Barbara, adopted a policy that Wikipedia "is not appropriate as the primary or sole reference for anything that is central to an argument, complex, or controversial." Liu said that it was too early to tell what impact his policy is having. In explaining his rationale -- which he shared with an e-mail list -- he wrote that he had "just read a paper about the relation between structuralism, deconstruction, and postmodernism in which every reference was to the Wikipedia articles on those topics with no awareness that there was any need to read a primary work or even a critical work."


Wikipedia officials agree -- in part -- with Middlebury's history department. "That's a sensible policy," Sandra Ordonez, a spokeswoman, said in an e-mail interview. "Wikipedia is the ideal place to start your research and get a global picture of a topic, however, it is not an authoritative source. In fact, we recommend that students check the facts they find in Wikipedia against other sources. Additionally, it is generally good research practice to cite an original source when writing a paper, or completing an exam. It's usually not advisable, particularly at the university level, to cite an encyclopedia."


Ordonez acknowledged that, given the collaborative nature of Wikipedia writing and editing, "there is no guarantee an article is 100 percent correct," but she said that the site is shifting its focus from growth to improving quality, and that the site is a great resource for students. "Most articles are continually being edited and improved upon, and most contributors are real lovers of knowledge who have a real desire to improve the quality of a particular article," she said.


Experts on digital media said that the Middlebury history professors' reaction was understandable and reflects growing concern among faculty members about the accuracy of what students find online. But some worry that bans on citing Wikipedia may not deal with the underlying issues.


Roy Rosenzweig, director of the Center for History and New Media at George Mason University, did an analysis of the accuracy of Wikipedia for The Journal of American History, and he found that in many entries, Wikipedia was as accurate or more accurate than more traditional encyclopedias. He said that the quality of material was inconsistent, and that biographical entries were generally well done, while more thematic entries were much less so. Like Ordonez, he said the real problem is one of college students using encyclopedias when they should be using more advanced sources.


"College students shouldn't be citing encyclopedias in their papers," he said. "That's not what college is about. They either should be using primary sources or serious secondary sources."

In the world of college librarians, a major topic of late has been how to guide students in the right direction for research, when Wikipedia and similar sources are so easy. Some of those who have been involved in these discussions said that the Middlebury history department's action pointed to the need for more outreach to students.


Lisa Hinchliffe, head of the undergraduate library and coordinator of information literacy at the University of Illinois at Urbana-Champaign, said that earlier generations of students were in fact taught when it was appropriate (or not) to consult an encyclopedia and why for many a paper they would never even cite a popular magazine or non-scholarly work. "But it was a relatively constrained landscape," and students didn't have easy access to anything equivalent to Wikipedia, she said. "It's not that students are being lazy today. It's a much more complex environment."


When she has taught, and spotted footnotes to sources that aren't appropriate, she's considered that "a teachable moment," Hinchliffe said. She said that she would be interested to see how Middlebury professors react when they get the first violations of their policy, and said she thought there could be positive discussions about why sources are or aren't good ones. That kind of teaching, she said, is important "and can be challenging."


Steven Bell, associate librarian for research and instructional services at Temple University, said of the Middlebury approach: "I applaud the effort for wanting to direct students to good quality resources," but he said he would go about it in a different way.


"I understand what their concerns are. There's no question that [on Wikipedia and similar sites] some things are great and some things are questionable. Some of the pages could be by eighth graders," he said. "But to simply say 'don't use that one' might take students in the wrong direction from the perspective of information literacy."


Students face "an ocean of information" today, much of it of poor quality, so a better approach would be to teach students how to "triangulate" a source like Wikipedia, so they could use other sources to tell whether a given entry could be trusted. "I think our goal should be to equip students with the critical thinking skills to judge." 


Source: https://www.insidehighered.com/news/2007/01/26/wiki 


