It’s not even 2017 yet, and it seems that Apple and Samsung will launch two of the most impressive smartphones of the coming year. Both the iPhone 8 and Galaxy S8 are going to feature brand new designs, as Apple and Samsung are working on all-glass devices that are supposed to support similar features. For the first time, both the new iPhone and Galaxy S models will come with OLED displays that incorporate fingerprint scanners and other sensors. Curved screens are reportedly in the works for both devices, and so are dual-lens rear cameras.

But what about the selfie cameras? A new report says the next Galaxy S model is going to have an even better front-facing camera, with a feature that has never been seen on an iPhone.

Korean site ETNews reports that Samsung has decided to equip the Galaxy S8 with an auto-focus front-facing camera that will let users take even better selfies.

“People are starting to take more selfies and number of demands for cameras that take selfies with higher qualities is increasing,” an unnamed industry representative said. Samsung is therefore rumored to be adding autofocus to the camera to differentiate its flagship phone from competitors.

Samsung has reportedly figured out a way to add autofocus to the front camera without increasing the size of the camera module or the thickness of the phone. ETNews says that the Galaxy S8 will use an “encoder” method with coils at the side, rather than the Voice Coil Motor used in rear cameras for autofocus. Samsung has not commented on the matter, saying that it “cannot discuss any information regarding new products that are not commercialized yet.”

It’s unclear at this time whether the FaceTime camera in the upcoming iPhone 8 will have autofocus or not. Current and past FaceTime camera versions do not have autofocus. The Galaxy S8 is expected to launch in early March next year, while the iPhone 8 will likely hit stores in mid-September.

Author:  Chris Smith

Source:  http://bgr.com

Traditional media sources such as print medical journals remain key ways for physicians to stay abreast of new and rapidly changing clinical information, according to a new annual report by CMI/Compas.

In October, CMI/Compas, the media planning and buying company, surveyed 2,780 healthcare professionals in the U.S. across 27 specialties, and released a report that focuses on the findings of six: primary care, cardiology, oncology, neurology, dermatology, and pulmonology. The firm included oncologists in the report for the first time this year to address the rising development of products in the category, said Dr. Susan Dorfman, chief commercial officer at CMI/Compas.

What the company found is that pharmaceutical and medical device companies, when seeking to reach physicians, should choose different media channels based on their needs.

“We should never put our eggs in any one channel basket,” said Dorfman. “Doctors are in a multitude of places. For clients who have a limited budget, we have to focus our investments and decide where we can make the most impact on those non-personal dollars as opposed to spreading those dollars too thin.”

Medical journals remain top sources for information but HCPs go online for immediate answers.

Traditional media sources such as print and online medical journals, medical conventions and meetings, and professional websites remain the top sources of information for physicians looking to stay abreast of medical developments and treatment options. The survey found that oncologists rank print and online medical journals as equally important, with 70% of them using both. And while other specialties did not rank pharma reps among their top sources of medical information, 53% of PCPs said they turn to pharma reps to stay abreast of medical developments and treatment options.

“I think PCPs are in many ways equipped to know everything that is happening,” said Dorfman. “You are their first point of contact. It's nearly impossible for them to read everything so having reps to share information ... may be a higher ranking source than other specialties.”

However, those traditional media sources are ranked more highly when physicians have more time. When they have ten minutes or less to answer a question, physicians across all specialties rely on the internet to find an immediate answer. The survey also found that 70% of HCPs across all specialties search online daily, with 46% of oncologists using online search engines for professional purposes at least four times per day.

“It used to be peers, but now they're searching the web,” said Dorfman. “Now we see that for certain specialties like oncology, peers don't even exist. There's no specific sources they go to though.”

With the internet as the main source of information for physicians when they are searching for an immediate answer, Dorfman said pharma companies should boost their search engine optimization to provide the information and content physicians need.

CMI/Compas found that physicians across all specialties visit brand-specific pharmaceutical or medical device websites to search for dosing information, safety information, and clinical data. However, the information is not easy to find.

“We have to ask when we build our website, who are we building it for?” said Dorfman. “Rather than driving them to the website and forcing them to search, we need to deliver them that information because they have less than ten minutes.”

Drugmakers that are doing this well are bringing the user to the content by incorporating features like keywords, she said.

“We're just starting to work with agencies in sharing a lot of this information, but I haven't seen a brand.com that is constructed in a way that is meaningful to the user yet,” said Dorfman. “I think there are non-brand.com sites that are more geared towards that.”

Clinical efficacy more important than drug cost.

Despite growing criticism about how drugs are priced, the number one factor that influences physicians' treatment decisions across all specialties, except neurologists, is clinical efficacy data, the survey found. Neurologists ranked a drug's safety and tolerability profile as the number one issue, followed by strong clinical efficacy data. Of the six specialties, only PCPs – 47% of them – factor in a drug's cost to a patient when making a prescribing decision.

“It may have to do with the severity of the condition and the conditions that they treat,” explained Dorfman. “What those specialists are looking at is what is going to work for their patients versus looking at the possible cost. It doesn't mean it's not important. It just means that, in the ranking, it wasn't up at the top.”

Oncologists, cardiologists, and neurologists are most likely to try new treatments.

The survey found that oncologists, cardiologists, and neurologists are most likely to prescribe new treatments for patients as soon as those therapies receive FDA approval, with oncologists being the most likely – 58% said they would try out new treatments.

“There is such a high unmet need to extend a patient's life, it's not surprising that they are most likely to try new treatments,” said Dorfman. “It's become much more important for us to be responsible for creating the awareness with these audiences. If we know there is a high propensity, it's on us to deliver relevant information to them.”

Sales reps' access to doctors has stabilized or opened up but with restrictions.

The survey found that PCPs, cardiologists, and dermatologists were most accessible to pharma and device reps without restrictions, and oncologists and pulmonologists were the least accessible.

Looking at rep access over a four-year period, Wayne Obetz, CMI/Compas' VP of investments and analytics and decision sciences, observed a dip in accessibility, a trend also reported by ZS Associates in its annual survey, which found that only 44% of physicians will meet with sales reps.

“The offices that just flat out won't see a rep have bottomed up,” said Obetz. “The offices that are opening back up look like they are opening back up by appointment only, or within opening hours.”

“Even when the rep has access, the overall time is limited,” added Stan Woodland, CEO of CMI/Compas. “So non-personal promotion becomes increasingly important in the success of a brand.”

Author: Virginia Lau

Source: http://www.mmm-online.com/

When you start a business and promote it on search engines and social media, you can begin with the beginner tips and tricks you will find all over the internet. They will help you establish your business's initial name on the web. Unfortunately, beginner strategies stop working after a very short period of time, and it gets really frustrating trying to keep feeding the search engines new information. In the past few years, search engines like Google have revamped their whole search ranking system a number of times, which has made the situation even harder.

Fortunately, there are still some hidden white hat tricks you can use to re-establish your business's search engine ranking and promote it properly on social media as well. The following tips will help you catch the right eyes in the haystack of non-converting visits to your website or page.

Make the menu items SEO friendly

Search Engine Optimization is the most important aspect of any online business. You cannot ignore the power that search engines have over your visitor stats. Many marketers and SEO experts assume it is fine for a website to have complex navigation, ignoring the fact that navigation links are much more than glorified links. If proper SEO tactics are applied to the anchor links and images in the navigation menu, you will see a significant impact on converting a visit into an action.

Google has some preferences about the number of words

Google has never published anything confirming whether longer or shorter articles are preferred. Still, the ranking mechanism seems to work better for pages with more substantive words. The highest-ranking pages generally have between 1,200 and 2,000 words. So if you have any doubts about the content on your website, go long.

It is a misconception that people do not like to read. Many customers will try to find as much information as possible about the product you are promoting before actually buying it. Instead of letting them go to some other review site, it is better to include as much information as possible about your product on the website itself.

Google understands you by your neighbors

We often try to get as many outbound and internal links as possible. Make sure you link to the right websites related to your work. Linking information to an authority site is always a good way of bringing credibility to your content.

Offer a unique platform for researchers

Researchers around the world are often looking for different ways to find information related to their work. Also, once the research is completed, they try to find platforms to promote it. You can give them that opportunity: either share their ongoing research so that they can find participants, or publish a whole paper on their research. In return, ask them for the much-coveted .edu or high-authority link back.

Ask for link backs 

If you are selling a product or a service, there are a lot of people around who like to review them. If you are selling on e-commerce platforms, there is a very good chance that someone is promoting your product on their website. The search engine tools available to website owners are very powerful these days. You can find out who is promoting or writing about your product or service and choose the best candidates for a link back. Ask them for a link to your website, and in most cases they will agree for the sake of better search engine ranking for their own pages.

Give your customers an outrageous offer

How about 90% off, or a buy-one-get-three offer? These look attractive and in most cases earn a spot on coupon and discount websites. Make the deal as crazy as possible to attract more eyes, and you will see the difference in sales in no time. If you are running an informative blog, hold a giveaway contest.

Describe your images properly

There are two ways to describe images: one for viewers and one for search engines. The information about your product should be easy to understand and use words people actually search for. People do not search with heavy vocabulary and often are not well versed with the dictionary. Write for a mass audience and keep the quality up.

In the case of search engines, crawlers look for alternate (alt) text on images and anchor links. Make sure you write a small 5-10 word description for every image. It will help crawlers index your images and improve your site's visibility.
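As a rough illustration of the check crawlers perform, here is a short Python sketch (hypothetical, not any search engine's actual code) that flags images missing alt text:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Flags <img> tags that are missing a descriptive alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # attrs arrives as a list of (name, value) pairs
            alt = dict(attrs).get("alt", "").strip()
            if not alt:
                self.missing.append(dict(attrs).get("src", "?"))

page = '<img src="shoe.jpg" alt="red running shoe"><img src="bag.jpg">'
checker = AltChecker()
checker.feed(page)
print(checker.missing)  # ['bag.jpg']
```

Running something like this over your own pages is a quick way to catch images that give crawlers nothing to index.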

Do not concentrate on basic keywords alone

When it comes to search engines, people use many different keywords and phrases to search for a product. It is important to include the top basic keywords on the product page, but the less conventional key phrases related to the product matter too. Include at least 3-4 such keywords so that people can find the product easily.

Keep analytics handy

Google Analytics is one of the best tools for improving SEO. Google's search system loves to promote pages with content that viewers love, and visitor signals are a clear indication of how your content is performing. You can emphasize the niches that are more popular and improve the quality of the website accordingly.

Big words hurt

Have you ever heard of a readability score? The highest-ranking pages have scores of 70+, and even 75-78. It measures whether the page is understandable to people with less education. If the score is 70+, the text is understandable to anyone who has passed grade 4; if it is 50 or less, the words and sentences are complicated even for a ninth-grade graduate. So keep this benchmark in mind and keep the content as simple and easy to understand as possible.
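The score in question is likely something like the Flesch Reading Ease formula. Here is a rough Python sketch of it; the vowel-group syllable counter is a crude approximation, so treat the numbers as indicative only:

```python
import re

def flesch_reading_ease(text):
    """Approximate Flesch Reading Ease: higher scores mean easier text.
    Syllables are estimated by counting vowel groups (a crude heuristic)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

simple = "The cat sat on the mat. The dog ran fast."
dense = "Comprehensive institutional optimization necessitates multidimensional analysis."
print(flesch_reading_ease(simple) > flesch_reading_ease(dense))  # True
```

Short sentences with short words land in the easy 70+ band; jargon-heavy copy can even score below zero.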

Author:  Lee Park yoo


Monday, 14 November 2016 12:57

Facebook video pays off

Mark Zuckerberg’s drive to “put video first” is also putting money in Facebook’s pockets. The more organic videos Facebook users watch, the more high-priced video ads Facebook can slip into the feed. Now Facebook’s strategy around auto-play video, paying Live content producers and offering more creative tools is helping to propel its massive revenue growth.

Facebook revealed yesterday during its strong quarterly earnings call that in the last year, Facebook’s average revenue per user grew 49.1 percent in the U.S. and Canada — Facebook’s home market where advertiser concentration, buying power and fast mobile networks make video and video ads popular. That’s compared to 35 percent growth worldwide. The U.S. and Canada’s ARPU grew 9.1 percent this quarter, faster than any other market.

In terms of viewership, Facebook has declined to share a stat since it announced 8 billion daily 3-second-plus views a year ago. But viewership has likely been growing dramatically, because as Mark Zuckerberg said on the earnings call:

“What is enabling video to become huge right now is that fundamentally the mobile networks are getting to a point where a large enough number of people around the world can have a good experience watching a video. If you go back a few years and you tried to load a video in News Feed it might have to buffer for 30 seconds before you watched it, which wasn’t a good enough experience for that to be the primary way that people shared. But now you can — it loads instantly. You can take a video and upload it without having to take five minutes to do that.”


The rise in video viewership also comes thanks to sharper cameras, bigger screens to watch on, better video creation tools and professional and amateur creators getting the hang of the mobile format.

Facebook’s begun adding Live video filters and effects, augmented reality selfie masks, overlaid graphics and more, built off of its acquisition of AR lens startup MSQRD. These are closing the feature gap between Facebook and its competitor, Snapchat.

While many believed Snapchat would steal Facebook’s users, the percentage of Facebook’s monthly visitors who come back daily has actually increased slightly since the rise of Snapchat in 2014. Holding steady at two-thirds of its user base, this stickiness stat is impressive for a 12.5-year-old utility.


Continued user count growth, engagement and the ability to earn more per user via video ads has contributed to Facebook’s $7.01 billion in revenues this quarter, up 59 percent year-over-year, and its $2.35 billion in profit. Essentially, Facebook’s soft pivot to video worked.

Normalizing the video feed

Back in 2013, seeing video in the News Feed was rare. Uploading to Facebook was clumsy, and whether the clips were native or from YouTube, they took a click and some load time to start watching.

That’s why people were downright angry about the whole idea of Facebook planning auto-play video ads. The Wall Street Journal trumpeted “Facebook Moves Cautiously on Video Ads,” delaying their roll-out. And rightfully so. Without much organic video content, video ads would have stuck out like sore thumbs.


Yet suddenly over the course of 2014, with the roll-out of auto-play and the rise of the ALS Ice Bucket Challenge video meme, organic videos became more and more prevalent in the feed. Meanwhile, advertisers started to get the hang of the format. They cut the intros and went straight to the action, adopting eye-catching visuals and subtitles to make up for the fact that they played silently unless tapped.

Facebook COO Sheryl Sandberg said yesterday that P&G is creating mobile video ads designed to grab attention in the first few seconds. She shared the example of Tide: in a typical TV ad, they start with a clean dress or shirt, then show it getting stained, and then cleaned with Tide. On mobile, they need to communicate the product value quickly, so they start by showing Tide cleaning a stained garment.

Masked by the surrounding organic content and designed for Facebook instead of TV, video ads became a normal part of the News Feed. That gave Facebook the freedom to show more of them, both in the feed and as suggested videos after you watched another, without people getting too pissed off.


Now Facebook is putting its connections with 4 million advertisers behind video. That includes big brands. As Sandberg said yesterday, “GM’s subsidiary Holden used Carousel Ads with video to maximize its sponsorship of Australia’s premier rugby tournament. Holden created a video series about their support of youth rugby. The ads generated an 8-point lift in brand favorability for the overall audience — and a 15-point lift amongst their target audience of women over 35.”

Facebook is also bringing small businesses to the video format. Sandberg explained that “For many small businesses, the shift to mobile means leveraging video for the very first time. Rather than needing a camera crew and production budget, anyone with a smartphone can shoot a video and share it on Facebook. In the past month alone over 3 million small businesses have posted a video on Facebook, including organic posts and ads.”


Compared to less vivid text and photo posts, Facebook can charge more for video ads without using up more space. CFO David Wehner said yesterday that “The average price per ad increased 6 percent in Q3.” Adtech firm AdRoll’s CMO Adam Berke agrees that video is pushing that increase. He tells TechCrunch, “Video ads garner a higher CPM than other ad formats, so that will certainly help drive revenue growth…We’re seeing interest in these types of video ad formats from our install base of over 25,000 businesses that never would’ve bought TV ads.”

Snapchat, Twitter and other services are also trying to cash in on video, where YouTube and Facebook have become dominant.


Snapchat’s vertical layout allows for full-screen ads that can feel more impactful and convenient than Facebook’s typically landscape videos. People also typically watch Snapchat with the sound turned on so videos automatically play with audio, unlike on Facebook. People purposefully visit YouTube to watch a specific video, so they’re willing to sit through pre-roll ads. And Twitter is becoming a home for premium video streams like the NFL and presidential debates, which draw advertisers.

But Facebook has several advantages of its own. Its 1.79 billion user reach is appealing to TV advertisers seeking scale. Meanwhile, its success the last five years has financed a leading artificial intelligence research team that Facebook is applying to make sure videos and video ads reach the right people.

Zuckerberg noted yesterday that “There’s a whole thread of work that we’re doing on visual understanding. Right, so understanding photos, what’s in photos, what’s in videos, what people are doing. There’s some deeper AI research that we’re doing…that can apply to things like ranking for News Feed and Search and ads and all of our systems more broadly.”

Facebook gets paid when its video ads work, and AI will help them target the people they’re most likely to work on.


When Facebook popularized the feed-based social network people browse to discover content, it became a home to colorful brand ads. As users first shifted to mobile, it attracted app install ads from developers desperate to rise out of the crowded app stores. Now as mobile data networks strengthen to support high-bandwidth content, Facebook has built a powerful distribution network that video advertisers want to join.

As Sandberg concluded yesterday, “When we think about video ads and what platform they run on, we really believe that over time the dollars will shift with eyeballs and our goal is to be the best dollar and the best minute people spend measured across channels.” The numbers say those dollars have arrived.

Source: techcrunch.com

Good news for security teams in businesses and government organizations all around the world, Matchlight is now available for public use.

Terbium Labs has recently announced the release of the automated data intelligence system for general use.

Here is how the release of Matchlight is going to affect data theft, which is one of the building blocks of the dark web.


Ever since the inception of its beta version in June 2015, Matchlight has sparked the interest of security teams at numerous organizations, as it offered innovative and highly effective information security measures.

Following its public release, small and medium-sized businesses can finally get a reprieve, as it offers highly effective information security measures at a fraction of the original cost.

Prior to Matchlight, detecting an information breach on the dark web took an average of 200 days, in addition to the numerous resources and manpower required to track down leads.

In comparison, Matchlight takes a matter of minutes to detect an information breach with pinpoint accuracy and runs 24/7.

The automated data intelligence system can be deployed privately within organizations using data fingerprinting technology, which lets the user create a one-way digital signature of sensitive information so that it can be detected on the dark web without ever being exposed.
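The general hash-the-windows idea behind such fingerprinting can be sketched in a few lines of Python. This is only an illustration of the approach, not Terbium's actual algorithm, and the record and crawl strings are made up:

```python
import hashlib

def fingerprints(text, window=14):
    """One-way fingerprints of every 14-character window of a record.
    Only the hashes leave the organization; the raw text never does."""
    return {hashlib.sha256(text[i:i + window].encode()).hexdigest()
            for i in range(len(text) - window + 1)}

record = "4111-1111-1111-1111"                # hypothetical sensitive record
crawl = "dump: 4111-1111-1111-1111 leaked"    # hypothetical dark-web page

# A shared fingerprint means a 14-character window of the record
# appears verbatim somewhere in the crawled text.
leaked = bool(fingerprints(record) & fingerprints(crawl))
print(leaked)  # True
```

Because only hashes are compared, the monitoring service can scan crawled data for matches without ever holding the customer's plaintext.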

Matchlight is based on a massive dark web search engine which scans every recess of the encrypted platform for any information it has been programmed to detect.

Matchlight poses a significant threat to the growth and continuity of the dark web, which is heavily based on stolen credit card information among other sensitive organizational data.

Round-the-clock monitoring and the affordability of the information security software remain a tantalizing prospect, especially for small and medium-scale businesses with limited resources that face a significant number of attacks from hackers.

Matchlight to Improve Data Breach Response Time

Matchlight is an automated intelligence system from Terbium Labs which allows a company to monitor their most critical information through a user-friendly API.

Data response delays can have adverse effects on an organization, especially following the breach of sensitive information by hackers and other malicious parties on the dark web.


A speedy response is the key to controlling the damage caused by the loss of an organization’s sensitive information and Matchlight seeks to drastically shorten the amount of time it takes to detect a data breach.

Organizations using the information security software will have at their disposal swifter data breach detection, round-the-clock monitoring and enhanced privacy which, compared to hiring teams of data analysts and security specialists, is a lot cheaper.

Automated System Constantly Updates as the Dark Web Expands

One notable edge Matchlight will have over traditional data breach intelligence is that the information obtained will be 100% authentic.

Digital signatures play a huge role in ensuring that users only get alerted when fingerprints of the monitored information become available on the dark web.

As for the rapidly expanding dark web, the automated information security system is well-equipped to keep up with the growth of the dark web, enabling you to gain access to numerous data sources.

Better yet, organizations will benefit from the user-ready information which saves a lot of time and resources spent to crunch raw data into something that can actually be of use to the organization.

Key Features

Included in its suite of services are the following features:

  • Retrospective search.
  • Data analysis for enhanced data monitoring.
  • Live data feeds which include the monitoring of highlighted keywords, credit information and identification numbers.
  • Private exact-string fingerprint monitoring with a resolution of as low as 14 characters.

Matchlight currently attracts a monthly fee of $5 per record and enables search access for up to 600 records every month.

Terbium Labs has effectively stepped up the war against information theft and possibly commenced the decline of the dark web itself.

Source : darkwebnews

Ever thought of working for Google? THE Google? According to the Fortune 100 list of 100 Best Companies to Work For, Google sits at the top spot followed by ACUITY and The Boston Consulting Group.

The American search engine firm is famous for its numerous perks and packages that any employee would find highly attractive. Interested? You're not the only one, so here's a list of questions the company has asked potential employees to test their thinking under pressure - a skill that is a must for Google employees.

1. 'How many piano tuners are there in the entire world?'

The question was asked of a product manager and is a classic example of a Fermi problem, which involves multiplying a series of rough estimates; the product lands close to the correct answer provided the individual estimates are reasonable.
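A back-of-the-envelope version of the method looks like this; every input below is an illustrative assumption, not a real statistic:

```python
# Fermi estimate of piano tuners worldwide.
world_population = 7_000_000_000
people_per_piano = 1_000                 # assume one piano per 1,000 people
tunings_per_piano_per_year = 1           # assume each piano is tuned yearly
tunings_per_tuner_per_year = 2 * 5 * 50  # 2 a day, 5 days a week, 50 weeks

pianos = world_population / people_per_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuners = tunings_needed / tunings_per_tuner_per_year
print(round(tuners))  # 14000
```

The point is not the exact number but the order of magnitude, and that the candidate can justify each factor in the chain.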

2. 'How much would you charge to wash all the windows in Seattle?'

Also asked of a product manager, who answered $10. The trick is to come up with a simpler answer than the question seems to demand.

3. 'Estimate the number of tennis balls that can fit into a plane'

This intern might have had the shock of his/her life being asked this.

4. 'What do you know about Google?'

One of the simpler questions, this was asked of an administrative business partner as a test of critical thinking.

5. 'Why are manhole covers round?'

A software engineer answered, "So the cover doesn't fall through" which called for immediate review of his mathematical skills.

6. 'Design an evacuation plan for this building'

How would you design a plan for a place you've only seen once? Maybe twice? The business analyst applicant had the same questions.

7. 'Name a prank you would pull on x manager if you were hired.'

Applications support engineer - one of the more lighthearted questions where one is tested on being able to loosen up.

8. 'List six things that make you nervous'

Because the position of Android support level 3 requires applicants to have grace under pressure.

9. 'Tell me a joke'

We'll leave it to your imagination if this executive assistant candidate got the room cracking or not.

10. 'How many times a day does a clock's hands overlap?'

A quick question asked of a product manager. The answer: 22.
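The answer falls out of relative speed: the minute hand gains 330 degrees per hour on the hour hand, and every full 360 degrees gained produces one overlap.

```python
# Minute hand: 360 deg/hour; hour hand: 30 deg/hour.
relative_speed = 360 - 30                  # degrees gained per hour
overlaps_per_day = 24 * relative_speed / 360
print(int(overlaps_per_day))  # 22
```

Equivalently, overlaps happen every 12/11 hours, so 24 hours holds exactly 22 of them, not the 24 most people guess.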

11: 'Describe AdWords to a 7 year old'

Associate account strategist - this and other similar questions test how one can simplify complex concepts.

12: 'What would you do if you didn't have to work?'

Interaction designer - One could easily answer with their deepest desires and passion and that could be what the firm is looking for.

Source : khaleejtimes

Credit where credit is due: Apple’s A-series chipsets are pretty impressive. Despite only recently stepping up to the quad-core table, Apple’s processors have traditionally stood their ground very well against the likes of hexa-core and octa-core SoCs from the likes of Qualcomm, Samsung and MediaTek. So how does Apple do it?

The Linley Group set out to find out just that. The chip research group tasked teardown experts Chipworks with disassembling the A10 Fusion chip and explaining the secret sauce that makes Apple processors so formidable.

The iPhone 7 outscores even some low-end PCs.

According to Linley Gwennap, the director of The Linley Group, “Apple’s investment in custom CPU design continues to pay off, as the new iPhone 7 delivers better performance than any other flagship smartphone and outscores even some low-end PCs.”

According to Gwennap’s research, the Apple A10 used in the iPhone 7 is notably faster than the Samsung Exynos 8890, the Qualcomm Snapdragon 820 and the Huawei Kirin 955.

The A10 delivers “nearly identical performance” to Intel’s Skylake processors.

Furthermore, Gwennap notes, “Apple’s new CPU actually compares better against Intel’s mainstream x86 cores,” claiming that the A10 delivers “nearly identical performance” to Intel’s Skylake processors, primarily due to its high performance Hurricane architecture.

Gwennap even forecasts an ominous future for Intel: “Apple’s CPU prowess is beginning to rival Intel’s. In fact, the new Hurricane could easily support products such as the MacBook Air that today use lower-speed Intel chips.”


The A10’s Hurricane cores do all the heavy lifting while the Zephyr (which translates to “light breeze”) cores perform energy efficient tasks, following ARM’s big.LITTLE architecture. Hurricane reportedly delivers 35% better performance over the Twister cores found in the A9, “boosting both the clock speed and the per-clock performance”. Meanwhile, the Zephyr cores reduce battery consumption.

It might be easy to ignore all this as typical Apple-focused grandiloquence, but Gwennap has data to back up the claims. Chipworks pulled apart the A10 chip, exposing its dual Hurricane and dual Zephyr cores and found some interesting things that set the A10 apart from the competition.

The A10's Hurricane cores are about twice the size of other high-end mobile CPUs.

The biggest revelation was just how huge the Hurricane cores are. At 4.18 mm² they’re “about twice the size of other high-end mobile CPUs”. Even the smaller Zephyr cores are much larger than their low-power counterparts – “nearly twice as large as Cortex-A53”. But does size really matter?


In this case, it does. According to Gwennap’s research, “die size is an important metric, since it drives both cost and power”. And herein lies the crux of the argument. Despite their gargantuan size, Apple cores are not necessarily more powerful than other chipsets per square millimeter. But Apple chips “make up for it in efficiency per clock cycle, thanks to a better ‘instruction per clock’ rate”.

Apple’s advantage is its ability to spend money.

There’s a lot of technical stuff going on in the analysis I won’t bore you with, but Gwennap essentially boils it all down to something we all already know:

“Apple’s advantage is its ability to spend money. Die area is expensive for a processor built in leading-edge 16nm FinFET technology…. Because Apple sells phones, not chips, adding a few dollars of die cost is of little importance if the resulting high performance enables it to sell more $600 products.”

Whether you believe throwing money at performance is the right move compared to keeping costs down and optimizing everything instead (remember, Apple chipsets don’t always have the best performance per square millimeter), Apple has at least demonstrated its approach makes a lot of money in the end.

Who do you think makes the best chipsets? Is speed or stability more important to you?

Source : tabtimes.com

Google is helping to power a new search engine built on a daily scan of the whole Internet.

Early this week the Austrian security company SEC Consult found that more than three million routers, modems, and other devices are vulnerable to being hijacked over the Internet. Instead of giving each device a unique encryption key to secure its communications, manufacturers including Cisco and General Electric had lazily used a much smaller number of security keys over and over again.
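The analysis behind SEC Consult’s finding boils down to grouping scanned devices by the cryptographic material they present: if many unrelated hosts serve certificates with the same fingerprint, they share a baked-in key. A minimal sketch of that grouping step, with made-up addresses and fingerprints rather than real scan data:

```python
from collections import defaultdict

# Hypothetical scan results: host IP -> SHA-256 fingerprint of its TLS certificate.
# In a real study these would come from an Internet-wide scan.
scan_results = {
    "203.0.113.5":  "ab12...",
    "198.51.100.7": "ab12...",   # same fingerprint -> same embedded key
    "192.0.2.44":   "cd34...",
    "203.0.113.9":  "ab12...",
}

hosts_by_fingerprint = defaultdict(list)
for host, fingerprint in scan_results.items():
    hosts_by_fingerprint[fingerprint].append(host)

# Any fingerprint shared by multiple hosts points to a reused key.
shared = {fp: hosts for fp, hosts in hosts_by_fingerprint.items() if len(hosts) > 1}
print(shared)
```

Run against millions of real scan records, this is how a handful of reused keys shows up across three million devices.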



That security screwup was discovered with the help of Censys, a search engine aimed at helping security researchers find the Internet’s dirty little secrets by tracking all the devices hooked up to it. Launched in October by researchers at the University of Michigan, it is likely to produce many more hair-raising findings. Google is providing infrastructure to power the search engine, which is free to use.

“We’re trying to maintain a complete database of everything on the Internet,” says Zakir Durumeric, the University of Michigan researcher who leads the open-source project.

Censys searches data harvested by software called ZMap that Durumeric developed with Michigan colleagues. Every day Censys is updated with a fresh set of data collected after ZMap “pings” more than four billion of the numerical IP addresses allocated to devices connected to the Internet. Grabbing a fresh set of that data takes only hours.
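ZMap itself is a heavily optimized, stateless scanner written in C; the underlying idea of probing an address to see whether anything answers can still be sketched in a few lines of Python (a slow, single-host illustration of the concept, not how ZMap actually works):

```python
import socket

def probe(host: str, port: int = 443, timeout: float = 2.0) -> bool:
    """Return True if the host accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# A ZMap-style scan repeats this idea, statelessly and vastly faster,
# across billions of addresses in a matter of hours.
```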

The data that comes back can identify what kind of device responded, as well as details about its software, such as whether it uses encryption and how it is configured. Searching on Censys for software or configuration details associated with a new security flaw can reveal how widespread it is, what devices suffer from it, who they are operated by, and even their approximate location.
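Once a daily snapshot is parsed into structured records, answering “how widespread is this flaw?” is essentially a filter over the dataset. A toy sketch follows; the field names, software, and version scheme are invented for illustration and do not reflect the real Censys schema:

```python
# Toy snapshot records; the real dataset is far richer (banners, certificates, geolocation).
snapshot = [
    {"ip": "203.0.113.5",  "software": "ExampleHTTPd", "version": "1.2", "country": "DE"},
    {"ip": "198.51.100.7", "software": "ExampleHTTPd", "version": "1.4", "country": "US"},
    {"ip": "192.0.2.44",   "software": "OtherServer",  "version": "3.0", "country": "US"},
]

# Suppose a newly disclosed flaw affects ExampleHTTPd versions below 1.3.
vulnerable = [
    r for r in snapshot
    if r["software"] == "ExampleHTTPd"
    and tuple(map(int, r["version"].split("."))) < (1, 3)
]

print([r["ip"] for r in vulnerable])  # which devices are affected, and roughly where
```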

Steve Manzuik, director of security research at Duo Security, says that Censys should help make the Internet more secure. His researchers used the tool in their investigation of a major security flaw on computers sold by Dell revealed last week.

Dell had to apologize and rush out remediation tools after Duo showed that the company was putting rogue security certificates on its computers that could be used to remotely eavesdrop on a person’s encrypted Web traffic, for example to intercept passwords. Duo used Censys to find that a Kentucky water plant’s control system was affected, and the Department of Homeland Security stepped in.

Censys was born after Durumeric and colleagues found themselves deluged with requests to run scans to help measure new problems. This March they helped with the response to a major encryption flaw affecting some five million websites including those of Apple, Google, and the FBI (see “Probing the Whole Internet for Weak Spots”).

It has competition in the form of a commercial search engine for security researchers called Shodan, which uses a similar methodology but different software. Durumeric says head-to-head tests show Censys offers significantly better coverage of the Internet and fresher data, making it better suited to measuring and responding to new problems.

John Matherly, founder and CEO of Shodan, says he doesn’t think his coverage is much different, and notes that Shodan currently probes IP addresses in a wider variety of ways than Censys, for example looking specifically for certain types of control system.

Those behind Censys and Shodan can agree that making it easier to ferret out flaws in the Internet should make it more secure. Matherly says his tool has led to over 100,000 industrial control systems being properly secured and helped with the shutdown of numerous servers used by criminals to control malware.

Source : technologyreview

After the third and final presidential debate of 2016, the only uncertainty remaining in this race is what Donald Trump will say in his concession speech.

Restrained (for him), disciplined (for him), Donald Trump got through a little more than one hour of television without major incident. Then of course it all went wrong.

But how much did it matter that it all went wrong? The election is shaping up as … not close. What constituency will exist after November 8 for Donald Trump’s complaints about the media, the voting machines, and the zombie voters of Pennsylvania? Most likely, very little.

The more future-relevant takeaways from the debate in Las Vegas concern Hillary Clinton, and the kind of president she’ll be. I noted four over the course of 90 minutes.

First: She really does intend to try to raise taxes. Without much prompting, she worked her own way to her version of “go where the money is”—in reply to a highly generic first question. “Cutting taxes on the wealthy—we tried that,” she said, omitting that of course the upper-income Bush tax cuts expired in 2013. The top tax rates are higher today than at any time since the early 1980s. If it’s up to President Clinton, they’ll go higher still.

Second: She’s gearing up for an early big battle over immigration, the first 100 days. This is not news of course. But here was a chance to soften or modify the commitment. She doubled down instead on a pathway to citizenship for all but “violent” criminal undocumented immigrants. Will the battle over this plan define her first year?

Third: Trade tensions will intensify no matter who is elected president. The U.S. has imposed anti-dumping duties on Chinese aluminum since the spring of 2011. Hillary Clinton detoured to reference aluminum dumping as a continuing problem. She reaffirmed a pledge against the Trans-Pacific Partnership: “I’m against it now. I'll be against it after the election. I'll be against it when I'm president.”

Fourth: She will reach the presidency as she sought it, deeply cautious and risk-averse. In the third debate, as so often before, she refused opportunities to turn tables, launch surprises, and generally do the unexpected. She answered the questions and answered them competently. She let Donald Trump defeat himself. She jabbed and triggered, but she risked no decisive action. A glimpse of the future?

Source : theatlantic.com


Google’s AI-powered personal assistant has the potential to reshape the landscape of voice search with its machine learning capabilities. However, due to the very nature of AI and machine learning, using Google Assistant comes with a trade-off: your personal data.

Built into Allo, along with the recently announced Pixel phone and Google Home device, Google Assistant collects more data from its users the more it is called upon.

Of course, this is all in an effort to deliver personal results and provide answers to more personal questions such as “when is my next appointment?”. It’s designed to learn about people’s habits and preferences in order to become smarter and more accurate.

As previously reported, conversations on Allo are not end-to-end encrypted by default. There is the option to turn end-to-end encryption on, but then you will no longer be able to use Google Assistant within the app.

Over time Google Assistant will learn about where you’ve been, where you’re going, what you like to eat, what kind of music you listen to, how you communicate, who your best friends are, and so on. As Gizmodo points out, it’s even capable of accessing information from anything stored on your device.

While many are embracing the idea of a personalized virtual assistant, it’s important to point out the drawbacks as well. Development of this technology relies on relinquishing your security and privacy to Google.

What Google will do in the future with all this data is anyone’s guess, but part of the company’s business model is to make money through targeted advertising based on user data. In fact, Google has already stated in this help article that it will do exactly that:

“If you interact with the Google Assistant, we treat this similarly to searching on Google and may use these interactions to deliver more useful ads.”

Google is certainly not the only company that has a responsibility to serve advertisers. For example, part of Apple’s business model is to serve advertisers through its iAd network.

A major difference is that the iAd network does not collect data from Siri, Apple’s own virtual assistant, nor does it collect data from iMessage, call history, Contacts or Mail. Apple’s CEO, Tim Cook, has confirmed this in the company’s privacy policy:

“We don’t build a profile based on your email content or web browsing habits to sell to advertisers. We don’t ‘monetize’ the information you store on your iPhone or in iCloud. And we don’t read your email or your messages to get information to market to you.”

One of the most important questions people have to think about going forward is: how much privacy and personal data are you willing to give up in order to experience the benefits offered by AI-powered technologies?

Source : searchenginejournal




