
It was the biggest bout of 2016 in the digital economy: MakeMyTrip (India) versus Union of India & Ors on September 1 in Delhi High Court. The largest travel website in India had filed a writ petition against the Directorate General of Central Excise Intelligence (DGCEI).

In January 2016, the tax authority arrested a MakeMyTrip (MMT) employee after searches at its Gurgaon premises.

The alleged offence: revenue loss to the government because hotels listed on MMT had not paid service tax. Just six months before this, MMT had been warding off price competition from Ibibo Group, backed by South African conglomerate Naspers; Oyo Rooms, backed by Japan's Softbank; venture capital-backed Yatra; and its older nemesis, ClearTrip.

Then, suddenly, there was solidarity among travel aggregators, which the high court noted. "There is a common pattern emerging in both cases (MMT and Ibibo) and… the scope of powers of DGCEI requires to be examined." 

Every hotel aggregator was claiming it had more than 25,000 hotels listed on its website (Yatra claimed 36,000). There had been DGCEI searches at the Gurgaon offices of Ibibo and Yatra on January 13 and at ClearTrip’s in Mumbai.

Each company responded with writ petitions claiming the authority’s overreach. One secret all tax advisors know is that the government loves ecommerce to bits. 

At no expense to the government, companies battle each other to build better technology platforms, bringing the informal sector—hotels, taxis, restaurants, single screen cinema halls, shopkeepers—to the digital economy. This helps the government form a money trail that didn’t exist.

Predictably, then, DGCEI lost the high court case. But it was able to counter MMT's claim of agreements with more than 30,000 hotels. DGCEI said the website possessed PAN details of only 3,922 hotels, "of which 1,728 were not even registered with service tax authorities."



Though its effort to make MMT liable for listed hotels' tax losses was wrong, as the court ruled, DGCEI had managed to trace a money trail to fewer than 2,000 hotels in the informal economy using just one travel website.

The DGCEI offensive was a surprise for online travel aggregators (OTAs) in India, which regrouped against the government, and won.

When MMT chairman Deep Kalra spoke with ET recently, he reflected on that phase. "Had we (OTAs) had an association at that time, we could have taken an even stronger stand. When a serious issue hits you, an association can help a company because it has that credibility."

The lack of such a body or think tank is ailing India's consumer internet industry, which is estimated by RedSeer Consulting to have brought $45 billion worth of goods and services online last year. (More than 30 per cent of this is because of Indian Railways' online bookings and the OTAs.)

More remarkably, in a departure from China, the local landscape has both size (331 million internet users) and diversity of customers, which means myriad types of companies and ideologies.

While Google and Facebook dominate their mainstay search and social networking businesses, Google's Chrome browser is battling Alibaba-owned UCWeb from China for the Indian smartphone user.

Can an internet industry based in metros and diversified across sectors find a cohesive voice?

In etail, Amazon vs Flipkart isn't the two-horse race it's often billed as. Snapdeal, ShopClues and Paytm also draw online shoppers, and Alibaba has just launched operations. Since 2010, less than $20 billion has gone into creating online category battles involving more than 3,000 startups.

Even as foreign investors such as Sequoia, Accel, Tiger Global, Alibaba and Softbank increased their India exposure like never before, the industry thus born competed aggressively to build technology and bring informal sectors online.



"Entrepreneurs are yet to come together. They have fallen short in finding that unified purpose," said a venture capital investor in Bengaluru, who requested anonymity.

There were murmurs of Flipkart cofounder Sachin Bansal starting a separate ecommerce body last year, but not much has come of it. The think tank vacuum is as conspicuous as the local market is globalised.

Even as late as 2013, the industry counted on Google's Rajan Anandan to bring legitimacy to fashion portals from India, such as Myntra. It didn't matter that Anandan was born in Sri Lanka, or that he is managing director of America-bred Google's South East Asia and India operations.

For fashion and apparel companies and sellers, a Google guy’s endorsement at Myntra conferences brought credibility to a nascent sector, also because he was then chairing industry body Internet and Mobile Association of India (IAMAI).

It still isn’t rare for the affable Anandan to be a keynote speaker at developers and software-as-a-service conferences in India, as he was part of the September 2015 launch of ed-tech portal Udacity’s launch in India.

He has invested in almost 50 startups in this region, a testament to how Anandan, and by extension Google, is culturally entrenched in India. But fissures are beginning to show. Is Anandan a messiah or mercenary? "Google makes money out of the digital economy which Rajan champions," noted the Bengaluru VC investor quoted earlier.

His larger point was, "We need that one guy who people value and respect as an independent voice, who connects and has the concern and a sense of mission-mode to emerge as a leader for the industry."


Considering the scale, velocity and size of the industry, the time and effort required is huge. "That person has not emerged," said the investor. It has to be a full-time pursuit, as in the case of the late Dewang Mehta for Nasscom (IT industry), Tarun Das (Confederation of Indian Industry) and, more recently, Sharad Sharma at iSPIRT (software product industry). In Delhi’s lobbying circles, the internet industry is seen as a spoilt brat.



Sample a couple of perceptions. First, the money is drying up but entrepreneurs' capabilities haven't gone up dramatically. Second, this is a two-sided market where buyers and sellers are getting subsidised for market share.

"They (Ecommerce companies) should all sit in a room — if one company decides to stop such subsidies, others need to agree as an ‘association,’ that they won’t allow contrary practices because they are anti-competitive. Right now, ‘Indian vs American’ or ‘Chinese vs Indian’ is an outcome of lack of unity, which no current industry association can fix," an industry observer in Delhi explained. 

The view from Bengaluru is a study in contrast. It is put down to a divide between the cities. "It’s almost like you have to be in Delhi to be influential, which means a significant amount of entrepreneurs’ time has to be spent in the National Capital to be influential," the VC investor said.

In 2014, Nasscom carved out the Internet, Mobile and Ecommerce Council (NIMEC), chaired by veteran Sanjeev Bikhchandani, who founded online classifieds company Info Edge, best known for jobs website Naukri.com. 

NIMEC is co-chaired by Kunal Bahl, cofounder of Snapdeal. There are also nine members and two special invitees (chief executives of Yepme and Zomato). Of the nine, five are CEOs of companies headquartered in Delhi (MMT, Paytm, PolicyBazaar, Jaypore and Google India). 

In all, nine of the 13 companies represented on the council are headquartered in Delhi. The rest are Latif Nathani of eBay (Mumbai) and Murugavel Janakiraman of Matrimony.com (Chennai), while Bhavish Aggarwal of Ola Cabs and Amit Agarwal of Amazon sit in Bengaluru. By category, the representation comprises four online retail companies and then a spate of aggregators (classifieds, travel, food tech, payments and so on).

But NIMEC is not a true mirror of the representation or influence of Bengaluru-based companies, where most of the capital has been infused. Bengaluru as a market, too, has a record of high user volumes and fast uptake of internet services.

This is reflected in the employment generated by Bengaluru companies, notably Flipkart. Bikhchandani countered this, saying the current 11 members do not restrict the agenda.



"All discussions are with the larger set of companies that is directly affected," he said by email. For instance, there have been goods and services tax (GST) discussions with every ecommerce member of Nasscom, including Flipkart. Payment inputs have been taken from Visa, Mastercard and Flipkart, among others who are not council members. 

There have been policy discussions on connectivity with Nasscom members who are not on the council, even emerging but key internet businesses from Bengaluru like UrbanClap (local home services) and Practo (healthcare appointments). 

"This is a diverse industry," said Bikhchandani."Ecommerce spans sectors — transport, travel, retail, pharma or payments — with different needs and focus areas. Even in the same sub-sector, we have had differences (say in etail) but finally, they come together to a common set of recommendations." 

Bikhchandani cited the Department of Industrial Policy and Promotion (DIPP) Press Note 3, which spelt out guidelines for FDI in ecommerce. Similarly, Nasscom inputs went to the recent Ratan Watal Committee to review the digital payments framework. 

"The internet industry has strong internal competition. However, cohesive voices do emerge," said Bikhchandani, adding that both Nasscom and IAMAI are effective industry bodies. Another Nasscom official noted that perceptions vary across generations, with two stark extremes.

"Bikhchandani, now in his 50s, has been through a number of phases, including a job and the early days of the internet. On the other hand, you have very, very young startups–take the other extreme of a Rahul Yadav, who co-founded Housing.com right after IIT and is the bad boy of the startup world," he explained.

There are far lower levels of patience among founders of new age internet companies. IAMAI and Nasscom measure themselves by government action on their policy recommendations, with, say, the Telecom Regulatory Authority of India (Trai) and DIPP, not by high-decibel statements to the media.

"We are business associations in the vein of CII, Ficci or Assocham," said Subho Ray, IAMAI president since 2006. "But yes, a think tank is required to focus on the impact of internet . As business associations, we may lack the correct representation when it comes to assessing technology impact." 

A think tank would call on industry players to go beyond their companies and individual interests and drive neutral policy. The lack of such a think tank shows in how the 'additional factor of authentication', an RBI stipulation on payment gateways for internet companies, applies to local companies but not to global competitors whose payment gateways sit outside India.

"The reality in aspects like two-factor authentication, which is a massive issue in digital payments, is that companies are actually disadvantaged," MMT’s Kalra told ET, calling for a level playing field.

The software product industry has a think tank in iSPIRT, run by Sharad Sharma. The digital industry is still looking for that voice, even as public sector behemoths like State Bank of India challenge Paytm’s credentials because it is seen to be less of an Indian company owing to its Chinese investors. 

As OTAs have discovered, in a diverse and even divided field, it takes a government hand to push the internet industry toward unity.

Author: Kunal Talgeri
Source: http://economictimes.indiatimes.com/tech/internet/can-internet-industry-based-in-metros-and-diversified-across-sectors-find-a-cohesive-voice/articleshow/56302210.cms

It's getting harder every year to get a decent cheap airfare. Prices go up, inflation increases and the EGP-to-USD rate is getting worse. So knowing how to fly cheap is no longer a luxury but essential basic knowledge.

Luckily there are many tricks that will help keep the prices down.

1 — Hidden flights trick

Without getting technical, sometimes it's cheaper to reach your destination (let's say Ethiopia) by booking a ticket to somewhere else (South Africa, for example) with a stop at Addis Ababa. This is sometimes called the "hidden-flight" trick.

Why did I use Ethiopia? Because this trick works mainly inside Africa and inside the US.

To find such tickets use this cool website: skiplagged.com

And don't worry, it's legal. But make sure to buy a one-way ticket, and have only a carry-on bag so you can skip the second leg of your flight safely.

2 — Be flexible

If there were just one trick to get the best airfare, this would be it. Being flexible by at least +/- 3 days around your intended departure date will almost always help you get a cheaper ticket.

Don't search for only one date; check around your dates for better fares. Use meta search engines like skyscanner.com to look around.

3 — Go budget airlines

Have you tried traveling from Hurghada or Sharm el Sheikh? Some budget airlines, such as Easyjet, Nile Air and Air Berlin, fly from these airports. And their prices are insanely cheap!

Last year, I took a flight from Hurghada to Milano, paying only EGP 300! (This would be around EGP 600 now.) But to be able to use such airlines you have to pay attention to the next trick.

4 — Drop the luggage

One of the best ways to fly cheaper is to go with a backpack or carry-on luggage only. Budget airlines give you very cheap fares if you go without bags. Also, EgyptAir recently introduced a new cheaper-than-economy fare for those who bring no luggage for the hold! So start learning how to pack efficiently and drop the luggage!

5 — Take a red-eye flight

Nobody loves traveling late at night or very early in the morning. That's why these flights are usually cheaper. So get ready to eat the cake that nobody wants and aim for a red-eye flight for better fares. You can always sleep on the plane, right?

6 — Land in a different city

I know you aim to go to Marseille, but did you consider landing in Paris and taking a bus or a train? Sometimes leaving from or landing at a different city in the country you aim to visit might be much cheaper.

Next time you consult skyscanner.com for a trip, don't enter your city, enter the country — and wish for some luck.

7 — Land in a different country

Extreme? Perhaps. But this could seriously save you some bucks. Travelling direct to Belgium, for example, is always expensive. So more than once I took a cheap budget flight to Dusseldorf, then a 20-euro train ride to Brussels. It saved me a hell of a lot of money!

8 — Go incognito

Always search for flights in incognito mode (on Chrome, press CTRL+SHIFT+N and you will be there). In that mode, airlines and search engines can't use stored cookies to recognise you as a repeat visitor. If they think you have visited before, they assume you are more likely to book this time and may bump the price up a little. Don't fall for that.

9 — Use debit cards

When paying for your flight online, there is often some kind of fee if you use a credit card. Most of the time debit cards don't incur fees, so use one of those, or cash if possible.

10 — Try two one-way flights

Most people search for a return flight when travelling, and it's usually cheaper. But not always. Try searching for each leg individually; sometimes you might get a better fare. This is especially true if you are willing to fly into a different city from the one you fly back from. A huge moneysaver.

Author: Nour El-din Ebrahim

Source: http://english.ahram.org.eg/NewsContent/8/27/254028/Travel/News/Travellers-tips--quick-tips-to-get-a-cheap-airfare.aspx

Friday, 30 December 2016 10:29

Technology in 2017: gadgetry goes old school

Given the failures of professional pollsters to predict anything of late, I am loath to be your crystal ball for the year’s upcoming tech developments. Those who imagined a revolution last year fuelled by the Apple Watch, heralding the death of the Swiss watch industry, have been proven mightily wrong.

Instead, it’s the little victories that fascinate, and in many ways, have greater relevance. Just as the Apple Watch has not killed off health monitors from Fitbit or Garmin, the new high-definition TV formats won’t necessarily drive those blissfully content with “normal” high-def LCD screens or Blu-ray players to upgrade. Sources in the trade suggest that “4K UHD” (for “Ultra High Definition”) and HDR (or “High Dynamic Range”) are desperate moves by manufacturers to counter the 3D fiasco, as even higher-resolution hardware is being developed for launch a few years hence.

A saturation point is being reached. Fewer people are prepared to swallow the depreciation that accompanies being “early adopters.” Equally, many buyers are overwhelmed by features they neither need nor want – yet they no longer fear being treated like Luddites who are simply afraid or ignorant of technology. Gone are the days when one was mortified by the superior tech knowledge of the average seven-year-old, despite recent TV ad campaigns to the contrary. Simply put, enough is enough.

Smart phones are the exception. A couple of years ago, for example, we reported on the Punkt.Phone, which removed all but the basics for those who only wanted mobile phones for voice and text. I have no idea what the take-up has been, but there is no discernible trend away from do-it-all models, and nothing has slowed down the hyperactivity in the world of smartphones – not even exploding batteries.

Phone junkies are lining up for Google’s first effort, the Pixel, which could be a game-changer. Samsung and Apple clearly have their work cut out for them, the latter having antagonised some customers by ditching the standard headphone socket. The latest craze is improving in-phone cameras, with specialist camera makers collaborating and co-branding with phone manufacturers.

No less than Leica and Kodak, two of the most important names in the history of photography, have appended their logos to new smartphones. Kodak’s Ektra – reviving a name from the past – even looks like an old-fashioned rangefinder when nestling in its case. It offers DSLR functionality, has a 21 megapixel main camera, a front-facing 13 megapixel camera, 4K video capture and a host of features you expect of a camera but not a phone. 

 

Porsche Design Huawei Mate 9 Phone


 

Leica has teamed up with no less than Porsche Design to create a limited-edition, all-black version of the Huawei Mate 9, with a price tag of £1,200, available exclusively from Harrods. This beauty – which I'll cover in depth next month after using it "for real" – features a Leica Summarit lens, a 20 megapixel monochrome sensor (did you hear that, retro-snappers?) and a 12 megapixel RGB Dual Camera. Both this and the Kodak boast all of the latest phone specs, the former with dual SIM capacity for example, so you aren't trading off smartphone performance for imaging.


That said, the standalone camera is not dead yet. Aside from selfies, smartphones still do not handle as well as made-for-the-purpose cameras, whether SLRs, compacts or rangefinders, and accessing the various functions is still fiddly compared to the physical buttons or knobs on a “proper” camera. The truism remains that the most important element is still the photographer: David Bailey with an iPhone is still going to massacre some nebbish with a Nikon.

Following the success of Olympus’ Pen F and the lust created by Hasselblad’s X1D, next year will see a host of new, sophisticated models to keep serious photographers from abandoning cameras for phones. Most new cameras already feature wireless connection to computers, tablets or phones, for easy transfer of images, GPS to add metadata, high-def video and other niceties. Amusingly, the hottest new cameras, especially the Pen-F and the latest Fujis, boast 1950s rangefinder styling.

Headphones continue their inexorable rise at the expense of loudspeakers – clearly this is analogous to what smartphones are doing to cameras. In sacrificing quality in both instances, we are losing performance for convenience, but the high-end is fighting back.

2017, in part because of iPhone 7, will see an increase in the number of Bluetooth models at all price points and quality levels. For those (like me) who prefer a physical cable, existing models with detachable cables can be converted for the iPhone 7 (which comes with an adaptor, by the way) with new cables terminated in a Lightning plug. 

 

MoFi One-Step LPs


 

Far be it from me to suggest that there is a global backlash against the tech onslaught in general, but the vinyl LP has had another bumper year, and, surprisingly, it has done so at the expense of downloads. An indication of its return to greatness is not the plethora of cheapo plastic record players, but one significant event: super-hip manufacturer Shinola, which revitalised watchmaking in the USA, has launched a serious turntable made in conjunction with high-end brand VPI.

 

Shinola turntable


 

Called the Runwell, and costing US$2,500, it is easy to use, beautifully made and utterly gorgeous. As many LP buyers don't play their records but display them as objets d'art, the Runwell could easily find an audience buying it for its looks alone. That would be a waste, however, because we all know that vinyl sounds best. So my predictions for next year? Back to the future. And on that note, have a suitably luxurious New Year.

Author: Ken Kessler
Source: http://www.telegraph.co.uk/luxury/technology/technology-2017-gadgetry-goes-old-school

This year will be seen as a watershed moment for mobile, with nearly every change reflecting mobile's now-dominant contribution to search.

Aloha, here we are again — coming down from the high of holiday e-commerce, the Q4 scurry of lead gen and the calm before year-end reporting starts churning. Let’s take a breather and look back on all of the changes in PPC that came flying at us in 2016.

First, let’s get one thing out of the way. In last year’s year-end roundup, I said Yahoo might be worth paying attention to in 2016 due to the renegotiated search deal with Microsoft and CEO Marissa Mayer’s stated commitment to mobile search. So long ago. It did seem like Yahoo might just be able to gain steam back then. Now, that steam is gone like the data of over a billion user accounts.

The final adieu to the Yahoo Bing Network came in February, and for many advertisers, that was the last time Yahoo figured in their campaign thinking. Sure, if you advertise with Bing and/or Google, your ads typically show up on Yahoo, too, but other than the water cooler talk about who was going to buy Yahoo, which developed into "Will Verizon still buy it, and at what discount?", Yahoo held little relevance for search marketers in 2016.

So, moving on to all the stuff that made 2016 a giant year in PPC! We’ll start with the biggies that impact most everyone and move to more specialized updates.

Major, major changes this year

There are always changes in paid search, but 2016 was not your garden-variety year. There were fundamental updates that will continue to have repercussions in the years ahead. There was a lot less frustration, however, in 2016 than in the last year of major changes — when Google unleashed Enhanced Campaigns in 2013. 2016 can be seen as the year mobile truly took hold as the primary focal point in paid search, with some reports showing mobile now accounts for 60 percent of searches in the US. Desktop results were changed to reflect mobile. That mobile-preferred check box for ads went away, and mobile bids can now be used as a foundation for campaign bidding. Enhanced Campaigns did their job.

Google upped its PR finesse in 2016. It announced advertisers would have to rewrite all of their ads at the same time that it announced device bidding was coming back. Desktop and tablet were re-separated for bidding, and it's now possible to have mobile be the base bid. Maybe you're not even doing anything differently yet, but knowing you can set a tablet bid adjustment or make a campaign mobile-first whenever you want feels so empowering, right? Well played, Google. Bing, which was never as restrictive with device bidding as Google to begin with, is currently piloting new bid adjustment ranges, but still makes desktop the base bid.
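
To make the arithmetic concrete, here is a minimal Python sketch of how device bid adjustments scale a base bid. The dollar amount and percentages below are illustrative assumptions, not recommended settings.

    # Each device's effective bid is the base bid times (1 + its adjustment).
    # Values are illustrative only; here mobile is the base device ("mobile-first").
    base_bid = 1.00
    adjustments = {
        "mobile": 0.00,     # base device, no adjustment
        "desktop": 0.25,    # +25 percent
        "tablet": -0.20,    # -20 percent
    }

    for device, adj in adjustments.items():
        print("%s: $%.2f" % (device, base_bid * (1 + adj)))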

We first reported the biggest change of the year — Expanded Text Ads — was being tested in April. ETAs went live for everyone at the end of July. The new ad format upends how advertisers have written text ads since the inception of AdWords, more than 15 years ago. The transition hasn't been without its bumps — Google pushed the cutoff for being able to edit and add standard text ads until January 31, 2017, after seeing slower-than-expected adoption of the new longer text ad format. There was the headline truncation kerfuffle, which mostly seems to have been remedied with a narrower font, but for the most part, advertisers have taken the changes in stride, on the promise of better CTRs.

2017 will be the year we really see how ETAs perform. Early results have been mixed, with some advertisers seeing dramatic bumps in click-through rates and others seeing, well, meh. Bing added support for ETAs as well, and rolled them out globally in October for much-welcomed parity between the two platforms in this area.

The ushering in of ETAs was made possible, of course, by the removal of text ads in the right rail on desktop, which also made desktop echo the layout of mobile results. It was quickly pointed out that longer titles and description copy in ETAs also have a way of making text ads look even more like organic listings. And speaking of making ads blend in with their organic surroundings, let’s not forget 2016 was the year of the green ad label. Green replaced the yellow in the ad labels next to the display URLs in text ads, which also happen to be green like their organic counterparts. (Want to see how Google’s color treatment of text ads has changed over the years? Here it is.)

Now for two announcements that generated a ton of interest but essentially had zero impact this year. First, the Google AdWords redesign. Some advertisers do have alpha access, but there are still a lot of elements missing before the new look is ready for prime time. Still, that didn't temper interest in some of the very handy visualizations in the new design. We'll have to wait until 2017 to get the full Material Design treatment that Google Merchant Center and AdSense got this year. Second, Microsoft is buying LinkedIn. The deal hasn't closed yet, but Microsoft's Lynne Kjolso told the audience at SMX Advanced this year that discussions of advertising scenarios were already happening shortly after the announcement.

Shopping & retail

With Amazon being Amazon, and with Facebook's Dynamic Product Ads and even Pinterest's Promoted Pins gaining adoption, Google is under pressure to squeeze everything it can from product search and its product listing ads. And squeeze it did this year. Carousels of product listing ads (PLAs) are now showing up in Google Image Search, YouTube and third-party retailer sites.

Google also opened Shopping campaigns up to Customer Match, allowing advertisers to retarget customers with product listing ads with bids tailored to those audiences or excluding those audiences from Shopping campaigns.

Google started looking at ways to get more from all those broad product searches, like "cocktail attire," this year. The most innovative but perhaps least likely to succeed of these is the "shop the look" format for apparel and home products, which pulls images from partners such as Curalate and Polyvore (owned by Yahoo, so there you go) and links to a set of product ads based on the looks. The other broad-query PLA format, called Showcase Ads, initially showed off retailer collections. But one recent variation on this featured new and used clothing on outlet-related searches.

Oh, and Purchases on Google — aka the buy button-like feature that lets consumers shop from a PLA on their phones — is ticking along in pilot mode. Ralph Lauren, Ugg and Staples are among the brands that continued to test it this year.

Google took a big step in standardizing product data in Google Shopping by requiring GTINs for brand-name products that are sold by multiple retailers in product feeds.

Another big change for sellers was Google’s announcement that retailers and brands must have at least 150 ratings in the past 12 months for seller ratings to appear in their ads. That was up from just 30.

Also, for manufacturers, it’s worth pointing out that Manufacturer Center is still alive. Introduced last year, but flying far under the radar, Google’s Manufacturer Center is where brands and original manufacturers can provide a primary source for their product data used in Google Shopping. Manufacturers that use it can get some pretty nifty insights into how their products perform across Google in the analytics dashboard, such as clicks made on their products versus competing products. This year, Google reduced the amount of data it’s requesting in Manufacturer Center, apparently because most weren’t providing complete information anyway.

Local and Maps

Local got a shake-up this year with the introduction of ads in the local pack, Promoted Places pins in Maps, exposure for local inventory ads in Maps and Knowledge panels, developments in store visits metrics, and the pulling of Google Maps out of the Search Partners network.

Ads started showing up in the Local Finder, the listings that appear next to the Map after a user clicks on “More places” from the search results, in April, around the same time Maps was moved into general search ad inventory. Later that month, Google started testing a purple “Ad” label on Local Finder ads and a corresponding purple pin on the map on Android and desktop. That didn’t last in the local finder, but the purple labels and pins did roll out in Google Maps.

And the big development in Maps, Promoted Places, has been in testing for a good part of the year. Retailers such as Walgreens, MAC Cosmetics and Starbucks have been testing the ads on Android that feature the brand logo in the pin and can include promotions.

Though still limited to a handful of metro markets in California, another area to keep an eye on in the local space is Google’s Home Services Ads program. This year, HSA opened up to HVAC services and electricians, and the whole program finally rolled out to mobile.

Google's efforts to connect online campaigns with offline impact continued in 2016. Its store transactions measurement is still in beta, and there weren't really any announcements around that this year, but Store Visits continued to gain traction in AdWords. Google announced it had measured more than one billion store visits from AdWords in 11 countries as of May (it's now available in 14 countries). Store Visits also expanded to Display Network campaigns. Finally, Store Visits data became available in distance and location reports in AdWords. (The distance report is an unsung resource for advertisers with physical locations.)

Audience targeting

Google has been steadily shifting from a focus on intent targeting to audience-plus-intent targeting, thanks to market pressure from social networks, Facebook chief among them. 2015's Customer Match was the first big step in this area.

The big news in audience targeting was demographic targeting — age and gender — rolling out of beta, and the ability to target similar audiences in search arriving in beta.

This fall, Google announced it would at last start to support cross-device retargeting. Google's head of search ads, Jerry Dischler, made several announcements on audience targeting for search at SMX East in October: cross-device retargeting was extended to remarketing lists for search ads (RLSA), demographic targeting for age and gender in search ads was rolling out of beta, and similar audiences for search is now in open beta. These all add up to big possibilities for refining the way we execute search campaigns in 2017 and beyond.

Analytics & reporting

Google unveiled the Analytics 360 Suite in May. The a la carte premium suite includes rebranded versions of Google Analytics Premium, Tag Manager and the Adometry attribution tools, as well as a new data management platform, a testing and optimization tool and a reporting and data visualization service. The nice thing is, the freeloaders got gifts, too. A free version of the reporting and visualization platform, Google Data Studio, rolled out early this summer. This fall, a free version of Google Optimize for landing page testing and optimization went into beta (sign up here).

Ad extensions

A quick rundown of what happened in extension land this year:

  • Bing launched a Social Extensions test in March that seems to have faded away.
  • Sitelinks started showing up in swipeable carousels. The new Price extensions started off as a list and then shifted to swipeable carousels.
  • Affiliate extensions didn’t get much fanfare when they rolled out, but I’m hoping to see some case studies on how these are working for manufacturers in 2017.
  • Message extensions came out of beta. There is a lot of promise in this extension, and it will be interesting to see the kind of support Message extensions receive next year.
  • A Visual Sitelinks test started running in late fall. On mobile, each sitelink displays with an image in a swipeable carousel card. (No, it's not just you, the swipeable card carousel showed up all over the place this year.) I'm not so sure about these, but we'll see.
  • The Promotions extension beta launched ahead of Black Friday. From what I’ve heard so far, this also holds lots of promise.

Honorable Mentions, in no particular order

Google added native inventory to the Display Network and introduced a responsive ad format to fill it. The responsive ads can run across the GDN, including in the newly available native ad inventory. Advertisers can convert text ads to responsive ads in Editor now. It looks like more may be in store for responsive ads soon.

Conversions became the king of measurement in AdWords, as Converted Clicks rode off into the sunset this fall.

Salesforce users can now import their lead data right into AdWords.

A whole bunch of weird stuff happened in AdWords Keyword Planner, presumably thanks to bots. And Google added forecasting and trend data for those with active AdWords campaigns.

Google banned payday loan ads, kinda sorta.

Here’s something I was excited about when it was first announced, but have yet to do anything with and am jealous of those who have: AdWords Campaign Groups.

Google started shutting down its Compare products in the US and UK early in the year — a big deal to the industries affected (credit cards, auto insurance, mortgages and travel insurance).

Google updated automated bidding in AdWords and introduced Portfolio bid strategies to make it possible to set distinct CPA targets at the ad group level.

In the US, those giant car ads, Model Automotive Ads (just rolls off the tongue), came out of beta on mobile, along with nearby dealer ads.

Christmas came early for Mac users with the release of Bing Ads Editor in June.

And that's a wrap on 2016. Expect to see the trends we saw this year — audiences; attribution, including online-to-offline; mobile; and automation — continuing to influence change in the year ahead.

Author: Ginny Marvin
Source: http://searchengineland.com/2016-paid-search-biggest-changes-266326

Saturday, 24 December 2016 00:00

Apple launches redesigned iCloud photos app

CALIFORNIA – After launching a new update to the Photos web app on the iCloud beta website earlier this month, Apple has now rolled out the update to all users (via Mac Generation).

The overhaul to the app on iCloud.com introduces a macOS-like Photos experience with a sidebar that can be toggled on and off, and a scrollable thumbnail view of every photo in an album at the bottom of the site when looking at individual pictures.

Also of note is a new horizontal scrubber to scroll between pictures taken before and after the current photo you’re viewing. Like some of the other new interface elements, this change makes the new app line up more closely with the experience in the current native Mac Photos app.

Author: Mahmood Idrees

Source: https://en.dailypakistan.com.pk/technology/apple-launches-redesigned-icloud-photos-app/

Friday, 23 December 2016 14:45

Big Data Industry Predictions for 2017

Wow! What a year 2016 has been. The big data industry carries significant momentum into 2017. In order to give our valued readers a pulse on important new trends leading into next year, we here at insideBIGDATA heard from all our friends across the vendor ecosystem to get their insights, reflections and predictions for what may be coming. We were very encouraged to hear such exciting perspectives. Even if only half actually come true, Big Data in the next year is destined to be quite an exciting ride. Enjoy!

IT becomes the data hero. It’s finally IT’s time to break the cycle and evolve from producer to enabler. IT is at the helm of the transformation to self-service analytics at scale. IT is providing the flexibility and agility the business needs to innovate all while balancing governance, data security, and compliance. And by empowering the organization to make data-driven decisions at the speed of business, IT will emerge as the data hero who helps shape the future of the business. – Francois Ajenstat, Chief Product Officer at Tableau

In 2017, we’re going to see analytics do more than ever to drive customer satisfaction. As the world of big data exploded, business leaders had a false comfort in having these mammoth data lakes which brought no value on their own when they were sitting unanalyzed. Plain and simple, data tells us about our customers — it’s how we learn more about customers and how to better serve them. As today’s customers expect a personalized experience when interacting with a business, we’re going to see customer analytics become the spinal cord of the customer journey, creating touch points at every level of the funnel and at every moment of interaction. – Ketan Karkhanis, SVP and GM of the Salesforce Analytics Cloud

Knowing the Unknown Unknowns – Enterprises that apply Big Data analytics across their entire organizations, versus those that simply implement point solutions to solve one specific challenge, will benefit greatly by uncovering business or market anomalies or other risks that they never knew existed. For example, an airline using Big Data to improve customer satisfaction might uncover hiccups in its new aircraft maintenance scheduling that could impact equipment availability. Or, a mobile carrier looking to grow its customer base might discover ways to improve call center efficiency. Discovering these unknown unknowns can enable organizations to make changes or fix issues before they become a problem, and empower them to make more strategic business decisions and retain competitive agility. – Laks Srinivasan, Co-COO, Opera Solutions

Democratization of Data Analysis – In 2017 I believe that C-suite executives will begin to understand that there is a real gap between their data visions and the ability of their enterprise to move data horizontally throughout the organization. In the past, big data analysis has lagged in implementation compared to other parts of the business being transformed by advanced technology such as supply chains. I believe companies will begin to place different data storage systems into the hands of end users in a fast and efficient manner that has user self-direction and flexibility, democratizing data analysis. –  Chuck Pieper, CEO, Cambridge Semantics

The battleground for data-enriched CRM will only continue to heat up in 2017. Data is a great way to extend the value proposition of CRM to businesses of all sizes, especially those in the small-to-mid-size range. By providing pre-populated data sets, the amount of "busy work" done by sales and other CRM users is reduced, and the better the data, the more effective individuals can be every moment of the day. A lot of M&A as well as in-house development and partnerships will fuel more data-powered CRM announcements in 2017. The key, of course, is seeing which providers deliver the most seamless and most sensible use cases out of the box for their customers. – Martin Schneider, Vice President of Corporate Communications, SugarCRM

In 2017 (and 2018), streaming analytics will become a default enterprise capability, and we’re going to see widespread enterprise adoption and implementation of this technology as the next big step to help companies gain a competitive advantage from their data. The rate of adoption will be a hockey stick model and ultimately take half the time it has taken Hadoop to rise as the default big data platform over the past six years. Streaming analytics will enable the real-time enterprise, serving as a transformational workload over their data platforms that will effectively move enterprises from analyzing data in batch-mode once or twice a day to the order of seconds to gain real-time insights and taking opportunistic actions. Overall, enterprises leveraging the power of real-time streaming analytics will become more sensitive, agile and gain a better understanding of their customers’ needs and habits to provide an overall better experience. In terms of the technology stack to achieve this, there will be an acceleration in the rise and spread of the usage of open source streaming engines, such as Spark Streaming and Flink, in tight integration with the enterprise Hadoop data lake, and that will increase the demand for tools and easier approaches to leverage open source in the enterprise. – Anand Venugopal, Head of Product, StreamAnalytix, Impetus Technologies

The unique value creation for businesses comes not just from processing and understanding transactions as they happen and then applying models, but by actually doing it before the consumer, or the sensor, logs in to do something. I predict we will quickly move from post-event and even real-time to preemptive analytics that can drive transactions instead of just modifying or optimizing them. This will have a transformative impact on the ability of a data-centric business to identify new revenue streams, save costs and improve their customer intimacy. – Scott Gnau, Chief Technology Officer, Hortonworks

Text analytics will be subsumed by ML/AI in 2017. The terms Text Mining and Text Analytics never really gained the kind of cachet and power in the marketplace that most of us hoped they would. This year will see the terms be subsumed by ML/AI and they’ll become component pieces of AI. – Jeff Catlin, CEO, Lexalytics

IT will start automating the choices for data management and analysis, leading to standardized data prep, quality, and governance. BI tools have been making more decisions for people and automating more processes. The knowledge for doing this — e.g., choosing one chart type over another — was embedded into the tools themselves. Data prep and management tends to be different, because the required rules are specific to the business requirements rather than being inherent in the data. Rule-based data management will enable IT to define rules that the business uses in its analytics processes, making business analysts more productive while still ensuring reliability and reproducibility. For a use case, consider a data scientist who sources data externally, and lets the data tools automatically choose which enterprise data prep and cleansing processes need to be applied. – Jake Freivald, Vice President, Information Builders

Managing the sprawl: Self-service analytics technologies have put analysis into the hands of more users and as a byproduct, led to the creation of derivative artifacts: additional datasets and reports, think Tableau workbooks and Excel spreadsheets. These artifacts have taken on a life of their own. In 2017, we will see a set of technologies begin to emerge to help organize these self-service data sets and manage data sprawl. These technologies will combine automation and encourage organic understanding, guided by well thought-out, but broadly applicable policies. – Venky Ganti, CTO, Alation

We will move from “only visual analysis” to include the whole supply chain of data. We will eventually see visualizations in unified hubs that show us more data, including asset management, catalogs, and portals, as well as visual self-service data preparation. Further, visualizations will become a more common means of communicating insights. The result of this is that more users will have a deeper understanding of the data supply chain, and the use of visual analysis will increase. – Dan Sommer, Senior Director and Market Intelligence Lead, Qlik

Artificial Intelligence

AI, ML, and NLP innovations have really exploded this past year but despite a lot of hype, most of the tangible applications are still based on specialized AI and not general AI. We will continue to see new use-cases of such specialized AI across verticals and key business processes. These use-cases would primarily be focused on the evolutionary process improvement side of the digital transformation. Since the efficiency of ML is based on constant improvement through better and wider training data, this would only add to the already expanding size of the data the enterprise needs to manage. Good data management policies would be key to achieving a scalable and sustainable AI vision. For the business users this would mean better access to actionable intelligence, and elimination of routine tasks that can be delegated to the bots. For users who want to stay relevant in the new economy, this would allow them to transform their roles into knowledge workers that focus on tasks that can still only be done with general intelligence. Business users that can train the AI models would also be a very hot commodity in the economy of the future. – Vishal Awasthi, Chief Technology Officer, Dolphin Enterprise Solutions Corporation

Why machine-led, human-augmented intelligence is the next tech revolution – In 2017, more C-suite executives are going to prioritize data-driven business outcomes. As C-level executives see the potential for analytics, they've begun to show greater participation in getting analytics off the ground in their organizations, and I expect they'll be leading the charge this year to ensure insights permeate every level and department of the business. All of the true technological revolutions have happened when people at a mass scale are empowered. So, shifting data science from an ivory tower function to giving everyone in an organization access to advanced, interactive AI will help each employee become smarter and more productive. It's becoming clearer that when data can inform each and every decision a business user is making, businesses are going to see a real competitive advantage and business outcome. – Ketan Karkhanis, SVP and GM of the Salesforce Analytics Cloud

Graph-Based Databases for Emerging Tech – The key applications companies are exploring — IoT, machine learning and AI — will be constrained by relational database technology. These areas will move towards sitting on top of graph-based architecture, which, by definition, expands much more quickly in response to the output of those learnings. If you think of AI, it cycles back on data many, many times, and once it has a conclusion, it asks for more information. If that information in a relational format is not already there, all those AI, IoT and machine learning programs stop. But if it's on a graph-based architecture, it automatically allows those multiple levels of joins to bring in more information. That will help unleash the real potential of some of those new technologies. – Chuck Pieper, CEO, Cambridge Semantics

The symbiotic relationship between man and machine will enable better decisions. Machines will never replace man, but they will empower and complement the data-driven efforts of workers in the coming years, especially as data becomes more accessible across departments and organizations. The democratization of data, the self-service movement and data's continued simplicity means more people will be leveraging it in more applications – paving the way for a better man vs. machine relationship. For example, IBM Watson can go through medical papers, research and journals and then present top choices, but only a trained doctor can make the right decision for a specific patient. Adding to that, the reskilling of the workforce through nanodegrees will simplify data even further. Technology is sharpening the workforce and putting the power of data into the hands of business users – AI and machine-learning will only help them achieve more. – Laura Sellers, VP of Product Management, Alteryx

My prediction about Big Data is that it will be subsumed into the topic of AI, as big data is an enabler of AI not an end in itself. The lack of focus on big data will actually let the field mature with only the serious players and result in much better business results. – Anil Kaul, Co-Founder and CEO of Absolutdata

Companies will stop reinventing the AI wheel. More and more companies are applying artificial intelligence and deep learning into their applications, but a unified, standardized engine to facilitate this process has lagged behind. Today, to insert AI into robots, drones, self-driving cars, and other devices, each company needs to reinvent the wheel. In 2017, we will see the emergence of unified AI engines that will eliminate or greatly mitigate these inefficiencies and propel the formation of a mature AI tech supplier industry. – Massimiliano Versace, cofounder and CEO, Neurala

AI will (still) be the new black. One topic that was covered ad nauseam in 2016 was AI. While it’s important to be cautious about all of the AI hype (especially when it comes to use cases that sound like science fiction), the reality is that this technology is going to evolve even faster from here on out. It’s just in the past few years that innovative business-to-business companies have started using AI to achieve specific business outcomes. Keynoters at this year’s IBM World of Watson conference highlighted ways in which it is already delivering impressive business value, as well as examples of how it might help a CEO decide whether to buy a competitor, or help a doctor diagnose a patient’s symptoms in just the next three to five years. – Sean Zinsmeister, Senior Director of Product Marketing, Infer

Artificial intelligence (AI) initiatives will continue, but in the vein of commoditisation – AI is garnering interest in the legal sector, but a closer inspection of the tools and apps being made available reveal that they are presently more similar to commoditised legal services in the form of packaged, low cost modules for areas such as wills, contracts, pre-nuptials and non-disclosure agreements for the benefit of consumers. Undoubtedly, AI offers tremendous potential and some large law firms have launched initiatives to leverage the technology. However, there’s a significant amount of work to be done in defining the ethical and legal boundaries for AI, before the technology can truly be utilised for delivering legal services to clients with minimal human involvement. Until then, in 2017 and perhaps for a few more years yet, we will continue to see incremental innovative efforts to leverage the technology, but in the vein of commoditisation – similar to what we have seen in the last 12 months. – Roy Russell, CEO of Ascertus Limited

AI and analytics vendor M&A activity will accelerate — There's no doubt that there's a massive land grab for anything AI, machine learning or deep learning. Major players, from Google, Apple, Salesforce and Microsoft to AOL, Twitter and Amazon, drove the acquisition trend this year. Due to the short operating history of most of the startups being acquired, these moves are as much about acquiring the limited number of AI experts on the planet as the value of what each company has produced to date. The battle for AI enterprise mindshare has clearly been drawn between IBM Watson, Salesforce Einstein, and Oracle's Adaptive Intelligent Applications. What's well understood is that AI needs a consistent foundation of reliable data upon which to operate. With a limited number of startups offering these integrated capabilities, the quest for relevant insights and ultimately recommended actions that can help with predictive and more efficient forecasting and decision-making will lead to even more aggressive M&A activity in 2017. – Ramon Chen, CMO, Reltio

AI and machine learning are already infiltrating the workforce across a multitude of industries. In fact, when it comes to HR and people management, more and more companies are starting to deploy technologies that bring transparency to data around the work employees do. This is creating huge opportunities for businesses to leverage frequent touch points, check-ins and opportunities to provide feedback to employees and get a holistic picture of what’s driving work. In 2017 we can expect to see data and analytics used more in HR and management to help visualize behaviors of employees, from the time they were hired to their success down the road, and understand why they have been so successful. By using machine learning companies can focus on building teams to support long-term goal achievement, instead of frantically hiring to fill immediate needs. – Kris Duggan, CEO of BetterWorks

Artificial intelligence (AI) is rapidly becoming more accessible. Previously, you needed a lot of training to implement AI, but this is becoming less and less true as technology becomes more intelligent. Over the next several years, we can expect AI to become more of a commodity and companies like Google and Microsoft will make it extremely easy for developers to analyze large amounts of data on their platform. Once that data analysis is done, developers will be able to implement processes based on those results, which is essentially AI. In the next year we can expect that AI will become much easier to implement for developers via API calls into their applications. – Kurt Collins, Director of Technology Evangelism & Partnerships, Built.io
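
The "AI via API calls" pattern described above can be illustrated with a short sketch. This is a hedged, minimal Python example only: the endpoint URL, the API key, the JSON fields and the response shape are hypothetical placeholders standing in for whatever a cloud AI provider exposes, not any specific vendor's real API.

    # Minimal sketch of calling a hosted AI model over HTTP and acting on the result.
    # The endpoint, credential and JSON schema below are hypothetical placeholders.
    import requests

    API_URL = "https://example.com/v1/analyze"   # hypothetical AI endpoint
    API_KEY = "YOUR_API_KEY"                     # hypothetical credential

    def classify_ticket(text):
        """Send a support ticket to the hosted model and return its predicted label."""
        resp = requests.post(
            API_URL,
            headers={"Authorization": "Bearer " + API_KEY},
            json={"document": text},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("label", "unknown")   # assumed response field

    label = classify_ticket("My internet keeps dropping every evening.")
    if label == "outage":                            # implement a process based on the result
        print("Route to network operations")
    else:
        print("Route to general support")

Here the "process implemented on the result" is simply routing a ticket; in practice it could be any downstream workflow the prediction feeds.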

This year we saw customer interactions evolve from traditional question and answer dialogues, to intelligent machines now enhancing the process and experience. Machines are learning patterns and providing answers to customers to help eliminate some of the mundane tasks that customer service agents used to handle; and intelligent machine personas like Alexa in the Amazon Echo and Siri in various Apple devices are paving the way. In 2017, we'll see more capabilities when it comes to artificial intelligence and customer service, like Alexa triggering a call from a contact center based on a question about online order status, thermostats submitting a trouble ticket after noticing a problem with the heater, or Siri searching through a cable company's FAQ to answer a commonly asked question about internet service troubleshooting. However, one thing will always remain true – human interactions will still be critical when dealing with complex situations or to provide the empathy that is needed in customer service. – Mayur Anadkat, VP of Product Marketing, Five9

For some, the mere mention of artificial intelligence (AI) corresponds to a fashion return from decades ago. So yes, those wide ties are back, and in 2017 we'll see the rapid adoption of AI in the form of relatively straightforward algorithms deployed on large data sets to address repetitive automated tasks. First a brief history of AI. In the 1960s, Ray Solomonoff laid the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction. In 1980 the First National Conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford and marked the application of theories in software. AI is now back in mainstream discussions and the umbrella buzzword for machine intelligence, machine learning, neural networks, and cognitive computing. Why is AI a rejuvenated trend? The three V's come to mind: Velocity, Variety and Volume. Platforms can now process the three V's with modern and traditional processing models that scale horizontally, providing 10-20X cost efficiency over traditional platforms. Google has documented how simple algorithms executed frequently against large datasets yield better results than other approaches using smaller sets. We'll see the highest value from applying AI to high-volume repetitive tasks where consistency is more effective than gaining human intuitive oversight at the expense of human error and cost. – John Schroeder, Chairman and Founder, MapR

The Cognitive Era of computing will make it possible to converge artificial intelligence, business intelligence, machine learning and real-time analytics in various ways that will make real-time intelligence a reality. Such “speed of thought” analyses would not be possible were it not for the unprecedented performance afforded by hardware acceleration of in-memory data stores. By delivering extraordinary performance without the need to define a schema or index in advance, GPU acceleration provides the ability to perform exploratory analytics that will be required for cognitive computing. – Eric Mizell, Vice President, Global Solutions Engineering, Kinetica

We expect three of the well-funded ML/AI companies to go out of business, while a number of the lesser funded companies will not get off the ground. In addition, we’ll lose more than a few pure-play text analytics companies as ML/AI subsumes more and more of the functionality. The influx of cash isn’t infinite, and companies will need to learn the importance of ROI/TCO analysis. Do you really need a slide or fire pole between floors? No. Do you need a budget for things like salary and advertising? Yes. Another common failure will be over-investing in the engineering aspect of the business. While it’s critical to have a great product, people also need to hear about it. If you can’t clearly articulate your business necessity, then it doesn’t matter how cool the product is. – Jeff Catlin, CEO, Lexalytics

Deep Learning will move out of the hype zone and into reality. Deep learning is getting massive buzz recently. Unfortunately, many people are once again making the mistake of thinking that deep learning is a magic, cure-all bullet for all things analytics. The fact is that deep learning is amazingly powerful for some areas such as image recognition. However, that doesn’t mean it can apply everywhere. While deep learning will be in place at a large number of companies in the coming year, the market will start to recognize where it really makes sense and where it does not. By better defining where deep learning plays, it will increase focus on the right areas and speed the delivery of value. – Bill Franks, Chief Analytics Officer, Teradata

By the end of 2017, the idea of deep learning will have matured and true use cases will emerge. For example, Google uses it to look at faces and then determine if the face is happy, sad, etc. There are also existing use cases in which police are using it to compare a “baseline” facial structure to “real time” facial expressions to determine intoxication, duress or other potentially adverse conditions. – Joanna Schloss, Director of Product Marketing, Datameer

The future of all enterprise processes will be driven by Artificial Intelligence, which requires the highest quality of data to be successful. AI is where all business processes are headed; however, with the recent push of AI technology advancements for businesses – many companies have not addressed how they will ensure that the data their AI models are built on is high quality. Data quality is key to pulling accurate insights and actions and in 2017, we will see more companies focus on solving the challenge of maintaining accurate, valuable data, so that AI technology lives up to its promise of driving change and improvement for businesses. – Darian Shirazi, CEO and Co-Founder, Radius

Prediction: Artificial Intelligence will Create New Marketing Categories, Like the B2B Business Concierge. In 2017, AI will allow marketers to create highly personalized ads tailored to buyers’ specific interests in real time through superior and infinite knowledge. AI will also make mass email marketing tools obsolete (and the resulting spam email), automatically screening out the “bad” leads and creating custom, personalized communication instead. As AI continues to advance, we can expect to see the recommendation engines that power companies like Netflix and Amazon be developed specifically for the B2B market. This will start to pave the way for a B2B business concierge – a completely automated and customized buyer’s journey throughout the funnel that is driven by AI. – Chris Golec, Founder & CEO, Demandbase

AI-as-a-Service will take off: In 2016 AI was applied to solve known problems. And as we move forward, we will start leveraging AI to gain greater insights into ongoing problems that we didn’t even know existed. Using AI to uncover these “unknown unknowns” will free us to collaborate more and tackle new, interesting and life-changing challenges … AI will amplify humans: We have made enormous leaps forward to build machines capable of understanding and simulating human tasks, even mimicking our thought process. 2017 will be the year of knowledge-based AI, as we develop systems based on knowledge, which learn and retain knowledge of prior tasks, rather than pure automation of tasks we want performed. This will completely disrupt the way we work as human capabilities are amplified by machines that learn, remember and inform … AI will be seen as solving the workforce crisis, not creating it: As the baby boomer generation retires, enterprises are on the brink of losing significant institutional mindshare and knowledge. With the astronomical price tag of losing these workers, enterprises are turning to knowledge management and machine learning to train AI to capture institutional knowledge and act on our behalf. In the coming year and beyond, we will see AI adoption not only come from technological need, but also from the need to capture current employee insights and know-how. – Abdul Razack, SVP & Head of Platforms, Big Data and Analytics, Infosys

How Does AI Fit in an Enterprise? Whatever the industry, we can take better advantage of AI by making our current work tools — apps, medical devices, supply chain systems — much better through machine learning. The key is in the delivery — in other words, the “operationalization” of the analytics. I like to use the analogy of the self-driving car. The best autonomous vehicle systems will surely be able to handle the driving task in typical conditions; there are lots of little decisions to be made, but they are straightforward and easy to make. It’s when conditions become more challenging that the magic happens; the car will not only know when a human should intervene but also will smoothly transfer control to the driver and then back again to the machine. We’re on the cusp of where our everyday work apps and devices shift from repositories to assistants — and we need to start planning for it. Today, employees — or their boss — determine the next set of tasks to focus on. They log into an app, go through a checklist, generate a BI report, etc. In contrast, AI could automatically serve up 50% (or more) of what a specific employee needs to focus on that day, and deliver those tasks via a Slack app or Salesforce Chatter. Success will be found in making AI pervasive across apps and operations and in its ability to affect people’s work behavior to achieve larger business objectives. – Dan Udoutch, CEO, Alpine Data
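
As a rough illustration of the assistant pattern Udoutch describes, the sketch below ranks a user's open tasks with a stand-in scoring function and pushes the top items to a chat channel via a Slack incoming webhook; the webhook URL and task fields are hypothetical.

```python
# A minimal sketch: rank tasks with a stand-in model score and post the top
# items to a chat channel. The webhook URL is a placeholder and score_task()
# stands in for a real trained model.
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def score_task(task):
    # Stand-in for a trained model; here, just weight urgency and value.
    return 0.6 * task["urgency"] + 0.4 * task["value"]

tasks = [
    {"name": "Follow up with renewal at Acme",   "urgency": 0.9, "value": 0.8},
    {"name": "Update Q1 forecast",               "urgency": 0.4, "value": 0.6},
    {"name": "Review open support escalations",  "urgency": 0.7, "value": 0.5},
]

top = sorted(tasks, key=score_task, reverse=True)[:2]
message = "Suggested focus for today:\n" + "\n".join(f"- {t['name']}" for t in top)
requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)
```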

Many Fortune 500 brands are already using chatbots, and many more are developing them as we speak. What’s ahead for the industry? Though it may not seem sexy, the next year will be a foundational one when it comes to applying AI. Chatbots are only as valuable as the relationships they build and the scenarios they can support, so their level of sophistication will make or break them. Investing in AI is only one piece of the puzzle, and 2017 will be the year that companies need to expand their AI initiatives while also doubling down on investing to improve them with new data streams and integration across channels. – Dave O’Flanagan, CEO, Boxever

The AI Hypecycle and Trough of Disillusionment, 2017: IDC predicts that by 2018, 75 percent of enterprise and ISV development will include cognitive/AI or machine learning functionality in at least one application. While dazzling POCs will continue to capture our imaginations, companies will quickly realize that AI is a lot harder than it appears at first blush and a more measured, long-term approach to AI is needed. AI is only as intelligent as the data behind it, and we are not yet at a point where enough organizations can harvest their data well enough to fulfill their AI dreams. – Ashley Stirrup, CMO, Talend

Hybrid Deep Learning systems. In 2017 we’ll see the rise of embedded analytics, optimized by cloud-based learning. The hybrid architectures used by autonomous vehicles – systems embedded within the vehicle to make numerous decisions per second, augmented by cloud-based learning platforms capable of optimizing decisions across the fleet – will serve as the foundation for the next generation of IoT machines. – Snehal Antani, CTO, Splunk
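
A minimal sketch of that hybrid pattern, assuming a hypothetical cloud endpoint that serves fleet-optimized model parameters: fast decisions stay local to the device, while the model is refreshed from the cloud on a slower cadence.

```python
# A minimal sketch of the hybrid edge/cloud pattern. The endpoint and model
# format are hypothetical; local_decision() stands in for an embedded model.
import time
import requests

MODEL_ENDPOINT = "https://fleet-learning.example.com/latest-model"  # hypothetical
model = {"brake_threshold": 0.5}   # simple stand-in for an embedded model

def local_decision(sensor_reading, model):
    # Runs on the device, many times per second, with no network dependency.
    return "brake" if sensor_reading > model["brake_threshold"] else "cruise"

last_sync = 0.0
for step in range(1000):
    reading = 0.42  # stand-in for a live sensor value
    action = local_decision(reading, model)

    # Periodically pull a model optimized in the cloud across the whole fleet.
    if time.time() - last_sync > 60:
        try:
            model = requests.get(MODEL_ENDPOINT, timeout=2).json()
        except requests.RequestException:
            pass  # keep using the local model if the cloud is unreachable
        last_sync = time.time()
```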

The focus will shift from “advanced analytics” to “advancing analytics.” Advanced analytics will continue to grow, and eventually be brought into self-service tools. With more users advancing their analytics, Artificial Intelligence (AI) might play a bigger role in organizations. But that means AI will also need to have high levels of usability as well, since users will need it to augment their analyses and business decisions. – Dan Sommer, Senior Director and Market Intelligence Lead, Qlik

Big Data

Many companies have ideas and initiatives around big data, but not a solid understanding of how it, along with the subsequent insights, will help them better the business or develop new solutions. Technology suddenly gave organizations the ability to process large amounts of data at a high frequency. That, together with the move to mobile (as every consumer has one or more devices that they are constantly online with), drives a lot of data – whether through social networks, search engines or more. You have the information but it needs to be taken one step further – you need to analyze it. The question for big data is “what can I learn from it? Where can I find meaningful insights?” – Dr. Werner Hopf, CEO and Archiving Principal, Dolphin Enterprise Solutions Corporation

Big data becomes fast and approachable. Sure, you can perform machine learning and conduct sentiment analysis on Hadoop, but the first question people often ask is: “How fast is the interactive SQL?” SQL, after all, is the conduit to business users who want to use Hadoop data for faster, more repeatable KPI dashboards as well as exploratory analysis. In 2017, options will expand to speed up Hadoop. This shift has already started, as evidenced by the adoption of faster databases like Exasol and MemSQL, Hadoop-based stores like Kudu, and technologies that enable faster queries. – Dan Kogan, director of product marketing at Tableau

Big Data, More Data, Fragmented Data – As we amass more enterprise data and blend third-party data, we create greater opportunity for insight and impact. However, let’s be honest. All companies are not created equal when it comes to their Big Data learning curves and sophistication. We will continue to see companies investing in, yet struggling with, building their data layers. Opera Solutions expects to see more attention and focus on data flow, data layers, and the emergence of the insights layer. – Georges Smine, VP Project Marketing, Opera Solutions

Moving into SMB – I see big data analytics and discovery for SMBs starting to take root in 2017. Big, rich data environments such as pharma, healthcare, life sciences, financial services and insurance are the industries currently leading big data analytics, but graph-based databases can also be used by small companies, where you don’t want to spend your time coding and recoding every time you change your mind about what it is you want to look for. – Chuck Pieper, CEO, Cambridge Semantics

Despite the hype and promise of big data and AI, few clear examples exist today where these technologies impact our lives on a daily basis. Serving relevant ads to website visitors and detecting fraud in credit card transactions come to mind. These companies have invested in big data and machine learning for years, which has allowed them to develop solid data architectures. Companies that have lived with NoSQL databases for more than a year know that ignoring data model design and instead leaning too heavily on the flexible, schema-free capabilities of these databases leads to poorly performing applications, difficult maintainability, and ultimately rework. In 2017, I predict the discipline of data modeling will gain strength as a sought-after skill set and project activity, particularly for companies dedicated to building impactful data strategies. Tools, such as well-designed industry clouds, provide the professional data model design necessary for long-term success. – J.J. Jakubik, Chief Architect, Vlocity

The sheer volume of data generated by applications and infrastructure will only increase, resulting in data overload. For the first time, IT Operations teams will embrace an algorithmic approach – also known as Algorithmic IT Operations, or AIOps – to detect signal from noise to ensure successful service delivery. AIOps platforms will provide IT Operations teams with situational awareness and diagnostic capabilities that were not previously possible using manual, non-algorithmic techniques. – Michael Butt, Senior Product Marketing Manager at BigPanda
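
As a toy illustration of the algorithmic approach Butt describes, the sketch below flags metric values that deviate sharply from their recent baseline; real AIOps platforms use far richer models, and the latency series here is invented.

```python
# A minimal sketch of "signal from noise": flag points that deviate from the
# recent baseline by more than a few standard deviations.
from statistics import mean, stdev

def anomalies(series, window=20, threshold=3.0):
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            flagged.append((i, series[i]))
    return flagged

latency_ms = [102, 99, 101, 98, 100, 103, 97, 99, 101, 100,
              98, 102, 99, 100, 101, 97, 103, 100, 99, 101,
              450,  # sudden spike that should be flagged
              100, 98, 102]
print(anomalies(latency_ms))
```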

We’re living in a big data glut. But in 2017, we’ll see data become more intelligent, more usable, and more relevant than ever. The cloud has opened the doors to more affordable, smart data solutions that make it possible for non-technical users to explore, through visualization tools, the power of predictive analytics. We’re also seeing the increasing democratization of artificial intelligence, which is driving more sophisticated consumer insights and decision-making. Forward-thinking organizations need to approach predictive analytics with the future and extensibility in mind. Today’s tools may not be the best for tomorrow’s needs. Cloud solutions are still evolving and haven’t reached functional maturity yet, but by merging cloud, open source, and agile development methodologies into their predictive analytics stack, organizations will be able to easily adapt as technology advances. – Slava Koltovich, CEO, EastBanc Technologies

One Team, One Platform – Data is the common thread within the enterprise, regardless of where the source might be. In the past data handlers have relied on disparate systems for data needs. Next year, the goal will be to move data into the future by providing a one-stop shop to access, develop and explore data. Companies will now look to one data platform for integrated cloud services with easy access and consistent behavior that is equipped to satisfy the needs of diverse data-hungry professionals across the organization. Just as you can easily access a variety of apps on your smartphone, business users and data professionals will look to deploy one platform that allows their organization to tap into the rich capabilities of data. – Derek Schoettle, General Manager, Cloud Data Services, IBM Watson and Cloud Platform

Next year will bring about another deluge of data brought on by advancements in the way we capture it. As more hardware and software is instrumented especially for this purpose, such as IoT devices, it will become easier and cheaper to capture data. Organizations will continue to feed on the increased data volume while the big data industry struggles through a shortage of data scientists and the boundaries imposed by non-scalable legacy software that can’t perform analytics at a granular level on big data. Healthcare will especially be hard hit in this regard. Sources of huge healthcare data sets are becoming more abundant, ranging from macro-level sources like surveys by the World Health Organization, to micro-level sources like next-generation genomics technologies. Healthcare professionals are leveraging these data to improve the quality and speed of their services. Even traditional technology companies are venturing into this field. For example, Google is ploughing money into its healthcare initiatives like Calico, its “life-expansion” project, and Verily, which is aimed at disease prevention. We expect the demand for innovative technical solutions in all industries, particularly healthcare, to explode next year. – Michael Upchurch, COO, Fuzzy Logix

Data lakes will finally become useful — Many companies that took the data lake plunge in the early days have spent a significant amount of money not only buying into the promise of low-cost storage and processing, but a plethora of services in order to aggregate and make available significant pools of big data to be correlated and uncovered for better insights. The challenge has been finding skilled data scientists who are able to make sense of the information, while also guaranteeing the reliability of the data against which everything else is aligned and correlated (although noted expert Tom Davenport recently claimed it’s a myth that data scientists are hard to find). Data lakes have also fallen short in providing input into and receiving real-time updates from operational applications. Fortunately, the gap is narrowing between what has traditionally been the discipline and set of technologies known as master data management (MDM), and the world of operational applications, analytical data warehouses and data lakes. With existing big data projects recognizing the need for a reliable data foundation, and new projects being combined into a holistic data management strategy, data lakes may finally fulfill their promise in 2017. – Ramon Chen, CMO, Reltio

I believe customers will choose solutions in Big Data that deliver faster time to value, simple deployment with ease of management, interoperability with open source tools and solutions that help bridge the skills gap. I predict that Big Data technologies like Hadoop will be adopted at an accelerated rate because customers must get smarter about data. Based on customer conversations, they understand they could be disrupted by a new competitor with a data driven business model. Hadoop will be at the core of a data driven business allowing organizations to be more agile, know more about their customers, and offer new services ahead of the competition. I believe the strength of the community, the work of Cloudera and Hortonworks along with maturing ecosystem tools, as well as interoperability with analytical tools, will provide a secure, enterprise ready data platform. – Armando Acosta, Hadoop Product Manager and Data Analytics SME, Dell EMC

Open source and faux-pen source data technology choices will continue to proliferate, but the new model will redistribute rather than purely reduce costs for enterprises. Vendors are walking away from traditional database and data warehouse business models. Prime examples of this are Pivotal open sourcing Greenplum, Hewlett Packard Enterprise (HPE) spinning off Vertica and other assets, and Actian stopping support for Matrix (formerly ParAccel). Open source projects – or in many cases, vendor sponsored faux-pen sources – are becoming the new model for data processing technology. But while open source reduces the costs of vendor licensing, it also shifts responsibility to the enterprise to sort through the options, assemble stacks and productionize open source projects. This increase in complexity and consumption challenges requires new hiring and/or partnering with as-a-Service cloud vendors. – Prat Moghe, Founder and CEO, Cazena

In 2017 organizations will shift from the “build it and they will come” data lake approach to a business-driven data approach. Use case orientation drives the combination of analytics and operations. Approaching a data lake as “imagine what your business could do if all your data were collected in one centralized, secure, fully-governed place that any department can access anytime, anywhere” may sound attractive at a high level, but too frequently results in a data swamp that looks like a data warehouse rebuild and can’t address real-time and operational use case requirements. Once the lake is in place, the concept is to “ask questions”. In reality, the world moves faster today. Today’s world requires analytics and operational capabilities to address customers, process claims and interface to devices in real time at an individual level. For example, any ecommerce site must provide individualized recommendations and price checks in real time. Healthcare organizations must process valid claims and block fraudulent claims by combining analytics with operational systems. Media companies are now personalizing content served through set-top boxes. Auto manufacturers and ride sharing companies are interoperating at scale with cars and the drivers. Delivering these use cases requires an agile platform that can provide both analytical and operational processing to increase value from additional use cases that span from back office analytics to front office operations. In 2017, organizations will push aggressively beyond an “asking questions” approach and architect to drive initial and long-term business value. – John Schroeder, Chairman and Founder, MapR

Big data goes self-service. Organizations that have realized the value of big data now face a new problem: IT and data teams are being flooded with requests from users to pull data. To address this, we’ll see more organizations opt for a self-service data model so that anyone in the company can easily pull data to uncover new insights to make business decisions. A self-service infrastructure allows any employee to easily access and analyze data, saving IT and data teams precious time and resources. To make this a reality, all types of data in every department will need to be published so that users can self-serve. – Ashish Thusoo, CEO, Qubole

2017 will be the year organizations begin to rekindle trust in their data lakes. The “dump it in the data lake” mentality compromises analysis and sows distrust in the data. With so many new and evolving data sources like sensors and connected devices, organizations must be vigilant about the integrity of their data and expect and plan for regular, unanticipated changes to the format of their incoming data. Next year, organizations will begin to change their mindset and look for ways to constantly monitor and sanitize data as it arrives, before it reaches its destination. – Girish Pancha, CEO and Founder, StreamSets
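
A minimal sketch of sanitizing records in flight, before they reach the lake; the expected schema and the sample records are assumptions for illustration, not any particular ingestion product.

```python
# A minimal sketch of in-flight validation: check each incoming record against
# an expected schema and quarantine anything that does not conform.
EXPECTED_FIELDS = {"device_id": str, "temperature_c": float, "ts": int}

def sanitize(record):
    """Return a cleaned record, or None if it should be quarantined."""
    cleaned = {}
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in record:
            return None  # schema drift: route to a quarantine area for review
        try:
            cleaned[field] = expected_type(record[field])
        except (TypeError, ValueError):
            return None
    return cleaned

incoming = [
    {"device_id": "th-17", "temperature_c": "21.5", "ts": 1483228800},
    {"device_id": "th-18", "temperature_c": "not-a-number", "ts": 1483228860},
    {"device_id": "th-19", "ts": 1483228920},  # missing field
]

clean = [r for r in (sanitize(m) for m in incoming) if r is not None]
print(f"{len(clean)} of {len(incoming)} records accepted:", clean)
```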

Companies have been collecting data for a while, so the data lake is well-stocked with fish. But the people who needed data most couldn’t generally find the right fish. I support the notion of a data lake, dumping all your raw data into one data warehouse. But it doesn’t work if you don’t have a way to make it cohesive when you query it. There have been great innovations by companies like Segment, Fivetran and Stitch, which make moving data into the lake easier. Modeling data is the final step that brings it all together and helps some of the best companies in the world see through data.
Companies like Docker, Amazon Prime Now and BuzzFeed are using all their data to create comprehensive views of their customers and of their businesses. When these final two steps are added, the data lake can finally be a powerful way to get all your data into the hands of every decision-maker to make companies more successful. – Lloyd Tabb, Founder, Chairman & CTO, Looker

In 2017, organizations will stop letting data lakes be their proverbial ball and chain. Centralized data stores still have a place in initiatives of the future: How else can you compare current data with historical data to identify trends and patterns? Yet, relying solely on a centralized data strategy will ensure data weighs you down. Rather than a data lake-focused approach, organizations will begin to shift the bulk of their investments to implementing solutions that enable data to be utilized where it’s generated and where business process occur – at the edge. In years to come, this shift will be understood as especially prescient, now that edge analytics and distributed strategies are becoming increasingly important parts of deriving value from data. – Adam Wray, CEO, Basho Technologies

In 2017, the reports of Big Data’s death will be greatly exaggerated, as will the hype around IoT and AI. In reality, all of these disciplines focus on data capture, curation, analysis and modeling. The importance of that suite of activities won’t go away unless all businesses cease operation. – Andrew Brust, Senior Director, Market Strategy and Intelligence, Datameer

Big data or bust in 2017? Big data is an example of something that didn’t get as far along as people predicted. Of course, it wasn’t stagnant. But nearly everyone involved in the enterprise sector would like it to move faster. The problem is that companies struggle, in general, to make sense of big data because of its sheer volume, the speed at which it is collected and the great variety of content it encompasses. Looking ahead, we can expect to see newer tools and procedures that will help companies house and examine these massive amounts of data and help them move toward truly making data-driven decisions. – Bob DeSantis, COO, Conga

In the new world of data, DBMS is really the management of a collection of data systems. This deserves new thinking about how we manage these systems and the applications that leverage them. The enterprise has long relied on raw logs and systems monitoring solutions to optimize their Big Data applications—and as companies continue to adopt numerous disparate Big Data technologies to help meet their business needs, complexity is only increasing while the time required to diagnose and resolve issues grows exponentially, all of which is underscored by an acute shortage of talent capable of effectively running and maintaining these intricate Big Data systems. The primary challenge faced by the enterprise is finding a single full-stack platform capable of analyzing, optimizing and resolving any issues that exist with Big Data applications and the infrastructure supporting them. In the year ahead, the enterprise will search for a solution that addresses the unmet challenges of data teams that find themselves spending much of their day digging through machine logs in order to identify the root cause of problems on a Big Data stack. These problems, if not eradicated, will continue to reduce application performance and divert teams from their real mission of deriving the full value from their Big Data. Ideal solutions will be ones that resolve problems automatically, detecting and pinpointing performance and reliability issues with Big Data applications running on clusters; solutions that open up the doors to data equality across the enterprise and that, with just the click of a button, drastically accelerate the time-to-value of Big Data investments. – Unravel Data

Big data wanes – Big data will continue to wane as a term. The focus now turns from infrastructure to applications with specific purposes. Companies will look to applications and new business models for concrete value, rather than the more general idea that data can be useful at scale. – Satyen Sangani, CEO, Alation

Business Intelligence

Self-service extends to data prep. While self-service data discovery has become the standard, data prep has remained in the realm of IT and data experts. This will change in 2017. Common data-prep tasks like data parsing, JSON and HTML imports, and data wrangling will no longer be delegated to specialists. With new innovations in this transforming space, everyone will be able to tackle these tasks as part of their analytics flow. – Francois Ajenstat, Chief Product Officer at Tableau
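
As a small example of the kind of prep task Ajenstat mentions, the sketch below flattens a nested JSON export and fixes a column type with pandas; the field names are made up for illustration.

```python
# A minimal sketch of a self-service prep step: flatten nested JSON and tidy
# column names and types before analysis. Field names are illustrative.
import pandas as pd

raw = [
    {"order_id": 1001, "customer": {"name": "Ada",   "region": "EMEA"}, "total": "250.00"},
    {"order_id": 1002, "customer": {"name": "Grace", "region": "AMER"}, "total": "99.50"},
]

df = pd.json_normalize(raw)                    # flattens customer.name, customer.region
df = df.rename(columns={"customer.name": "customer_name",
                        "customer.region": "region"})
df["total"] = pd.to_numeric(df["total"])       # fix the type while wrangling
print(df.groupby("region")["total"].sum())
```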

Many Big Data systems are lacking simple UIs for data input and classification. This usually requires highly technical staff and costs for the configuration, ongoing use, and interpretation of Big Data. This produces a high cost of entry and ongoing expenses. To add insult to injury, even once deployed, if the tool cannot be completely adopted by all necessary end users due to complexity, all BI efforts may be for naught. Successful user interfaces (UIs) are simple and flexible and adjust to the needs of a variety of users and any changes to fluid data sets. This is the future of Big Data: making Big Data even more accessible and accurate, and therefore indispensable. Just as other technologies have evolved, BI is evolving to be more accessible than ever to today’s business. This will only continue in the future. – Dave Bethers, Chief Operations Officer, TCN

Digital transformation will be a CIO imperative for greater than 50% of all institutions. As such, IT will no longer be pushing Big Data technologies to the business owners. Instead, IT will need to respond to the demands for faster and more predictive analytics. Data scientists will be embedded into the business units in larger companies, and in smaller firms everyone will be considered a citizen data scientist. Regardless, business intelligence will no longer be considered a department but an attitude. A way of life. At least for those who plan to be in business by 2019. – Anthony Dina, Director Data Analytics, Dell EMC

In 2017, business people will become ‘data mixologists’, capable of blending data from any combination of systems – centralized and decentralized – to produce new insights on their own, share them with others, and make better, more trusted business decisions. Historically, mixing together data from spreadsheets, databases, or applications like Marketo, Salesforce and Google Analytics has been an inaccessible capability for business people, as well as a data governance nightmare. Until now, self-service data prep tools have been designed for data scientists who work in silos of disconnected data – a phenomenon known as “data discovery sprawl”. These silos produce inaccurate and unreliable insights, and they don’t put those insights in the hands of business decision-makers. In the coming year, we will see business users choose modern tools that help them become data mixologists, making empowered decisions from trustworthy data sets. – Pedro Arellano, VP of Product Strategy, Birst

Cloud

The move to serverless architectures will become more widespread in the coming years, and will impact how applications are deployed and managed. Serverless architectures allow users to deploy code and run applications without managing the supporting infrastructure. Instead, the supporting infrastructure is managed by a third party. AWS Lambda is an example, and we anticipate growth in the number of providers and the breadth of enterprise-ready applications. As use of serverless architectures begins to rise, the overall application development and deployment strategy will begin to shift away from operations and more towards business logic. More cloud providers will also begin migrating to this form of architecture, allowing for a more competitive market with more expansive application support. As such, it will be important for database solution providers to be ‘cloud-ready.’ – Patrick McFadin, Chief Evangelist for Apache Cassandra, DataStax
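
For a sense of what "business logic only" looks like in practice, here is a minimal function in the AWS Lambda style; the event shape assumes an API Gateway trigger, and other triggers differ.

```python
# A minimal sketch of a serverless function: the handler below is the only
# code deployed, and the provider runs and scales it on demand.
import json

def lambda_handler(event, context):
    # Business logic only -- no servers, queues, or process management here.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```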

The conversation around vendor lock-in is becoming much more prominent in senior level meetings, spurred on by many enterprises’ decision to move to the public cloud. To this point, the issue of vendor lock-in was initially discussed as a black or white situation. However, in 2017 we are going to see this conversation shift to acknowledge the many shades of gray, as executives realize and consider the varying degrees of lock-in and how it impacts various departments and levels of management. Examining the potential consequences of using proprietary technology on the different levels of the hardware and software stack will be an important issue within companies this year as more enterprises implement digital transformation initiatives. – Bob Wiederhold, CEO, Couchbase

Big data and the cloud will go hand-in-hand. Five years ago concerns over security and compliance kept enterprises from embracing big data in the cloud. Now, best practices and advancements in technology have allayed those concerns, while the cloud’s agility and ease of use are becoming must-haves for processing big data. As big data moves from an experiment to an organization-wide endeavor, the cost, time and resources needed to manage a massive data center don’t make sense. As a result, more and more companies will look to the cloud to help with the costs of data management. In 2017, expect enterprises to move their big data projects to the cloud in droves. – Ashish Thusoo, CEO, Qubole

2017 will be the year big data platforms go operational with the rise of hybrid clouds. We will see more customer cloud apps, such as Salesforce CRM and Oracle CX, accessing big data insights directly from on-premises big data platforms, which are the foundations of enterprises’ digital transformation and omni-channel marketing strategies. Examples of big data insights that support additional functional areas, such as sales and marketing, include predictive models, lead scoring or personalization. This typically starts with the ingestion of customer and marketing data into a data lake, where the source data is commonly stored in hybrid cloud and on-premises systems. And to operationalize those insights, we’ll see greater demand for standard REST interfaces to big data sets primarily accessible from SQL (such as Hive, Impala or Hawq) for hybrid connectivity from SaaS applications or cloud and mobile application development. For on-premises consumers of hybrid data, we expect hosted big data platforms such as IBM BigInsights on Cloud, Amazon EMR, Azure HDInsights or SAP Altiscale to run more big data workloads that are not suitable for local data centers in the cloud, sending only the insights to on-premises systems for core business operations. – Sumit Sarkar, Chief Data Evangelist, Progress

Big-Data-as-a-Service. Big Data continued to see rising adoption throughout 2016, and we’ve observed an increasing number of organizations that are transitioning from experimental projects to large-scale deployments in production. However, the complexity and cost associated with traditional Big Data infrastructure has also prevented a number of enterprises from moving forward. Until recently, most enterprise Hadoop deployments were implemented the traditional way: on bare-metal physical servers with direct attached storage. Big-Data-as-a-Service (BDaaS) has emerged as a simpler and more cost-effective option for deploying Hadoop as well as Spark, Kafka, Cassandra, and other Big Data frameworks. As the public cloud becomes a more common deployment model for Big Data, we anticipate many of these deployments shifting to BDaaS offerings in 2017. In addition to solutions offered by newer BDaaS vendors like BlueData and Qubole, we’ll see more initiatives from established public cloud players like AWS, Google, IBM, and Microsoft. We can also expect a range of other announcements that will further validate the trend toward BDaaS, including both major partnerships (such as VMware’s recent embrace of AWS) and acquisitions (SAP buying Altiscale). As the ecosystem expands, customers will have the flexibility to choose from a range of BDaaS solutions, including public cloud as well as on-premises and even hybrid options (e.g. compute in the cloud and data stored on-premises). – BlueData

Data Governance

The Chief Data Officer Moves to New Heights – In this past year, we’ve seen the Chief Data Officer emerge as an instrumental part of the organization’s plan to harness the full value of data for competitive advantage. In 2017 we will see this role evolve further with the acceleration of CDO hires across industries to help with competitive pressures, aggressive global regulations (things like GDPR and BCBS 239) and the general increasing speed of business. Gartner predicts that by 2019, 90% of large organizations will have a CDO. We see this happening much quicker with the CDO rising as data hero within the organization when faced with the new challenges of managing the big data overload dispersed in separate systems and data silos among specific groups and users enterprise-wide. Wearing a super cape, CDOs will figure out a way to break down the data unrest that likely exists today by implementing business-focused governance processes and platforms and enabling and empowering every user across the enterprise to use and capitalize on data for competitive advantage. – Stan Christiaens, co-founder and CTO of data governance leader Collibra

In 2017, the governance vs. data value tug of war will be front and center. Enterprises have a wealth of information about their customers and partners. Leaders are transforming their companies from industry sector leaders to data-driven companies. Organizations are now facing an escalating tug of war between the governance required for compliance and the use of data to provide business value, while implementing security to avoid damaging data leaks and breaches. Financial services and health care are the most obvious industries, with customers counting in the millions and heavy governance requirements. Leading organizations will manage their data between regulated and non-regulated use cases. Regulated use cases require data governance, data quality and lineage so that a regulatory body can report and track data through all transformations to the originating source. This is mandatory and necessary but limiting for non-regulatory use cases like customer 360 or offer serving, where higher cardinality, real-time data and a mix of structured and unstructured data yield more effective results. – John Schroeder, Chairman and Founder, MapR

Moore’s Law holds true for databases. Per Moore’s law, CPUs are always getting faster and cheaper. Of late, databases have been following the same pattern. In 2013, Amazon changed the game when they introduced Redshift, a massively parallel processing database that allowed companies to store and analyze all their data for a reasonable price. Since then however, companies who saw products like Redshift as datastores with effectively limitless capacity have hit a wall. They have hundreds of terabytes or even petabytes of data and are stuck between paying more for the speed they had become accustomed to, or waiting five minutes for a query to return. Enter (or reenter) Moore’s law. Redshift has become the industry standard for cloud MPP databases, and we don’t see that changing anytime soon. With that said, our prediction for 2017 is that on-demand MPP databases like Google BigQuery and Snowflake will see a huge uptick in popularity. On-demand databases charge pennies for storage, allowing companies to store data without worrying about cost. When users want to run queries or pull data, it spins up the hardware it needs and gets the job done in seconds. They’re fast, scalable, and we expect to see a lot of companies using them in 2017. – Lloyd Tabb, Founder, Chairman & CTO, Looker
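
A minimal sketch of querying an on-demand warehouse from code, using the BigQuery Python client as the example; the project, dataset and table names are placeholders, and credentials are assumed to be configured in the environment.

```python
# A minimal sketch of an on-demand warehouse query. Placeholders throughout;
# requires the google-cloud-bigquery package and configured credentials.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")        # placeholder project
sql = """
    SELECT region, SUM(revenue) AS revenue
    FROM `my-project.sales.orders`                     -- placeholder table
    WHERE order_date >= '2017-01-01'
    GROUP BY region
    ORDER BY revenue DESC
"""
for row in client.query(sql).result():
    print(row.region, row.revenue)
```

Storage here is billed separately from queries, which is the "pennies for storage, hardware on demand" economics the prediction describes.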

The rise of “applied governance” to unstructured data. Earlier this year, more than 20,000 pages of top-secret Indian Navy data, including schematics of their Scorpene-class submarines, were leaked. It’s been a huge setback for the Indian government. It’s also an unfortunate case study for what happens when you lack controls over unstructured information, such as blueprints that might be sitting in some legacy engineering software system. Now, replace the Indian Navy scenario with a situation involving the schematics for a nuclear power plant or consumer IoT device, and the value of secure content curation becomes immeasurable. If unstructured blueprints and files are being physically printed or copied, or digitally transferred, how will you even know that content now exists? Tracking this ‘dark data’ – particularly in industrial environments – will be a top security priority in 2017. – Ankur Laroia, Leader – Solutions Strategy, Alfresco

Organizations have viewed data governance as a tax. It’s something you had to do for compliance or regulatory reasons, but it wasn’t adding value to the business. In reality, governance is crucial to driving business value. Think about the enormous amount of time and money being spent these days to harness the value of data – the whole Big Data movement. Organizations know there is tremendous value to be had, but many of them aren’t actually getting the value despite their investment. Gartner says: Through 2018, 80% of data lakes will not include effective metadata management capabilities, making them inefficient. Why? Two reasons: First, they don’t have the lineage and provenance of the data they’re analyzing. When they put bad or misleading data into their analysis, they’re going to get unreliable results back out. That’s a lack of data governance. Second, and perhaps even worse, organizations are afraid to share the data they’ve gone to great expense to create. They can’t answer questions such as: Under what agreements was the data collected? Which pieces are personal information? Who’s allowed to see it? In which geographies? With what redistribution rights? If you can’t answer these questions, you can’t share the data. Your data lake is fenced off. This is another failure of governance. Businesses will realize that governance gives them the highest quality results that can be shared with the right audiences and drive the greatest business value. – Joe Pasqua – EVP Products, MarkLogic

The Chief Data Officer position will pick up steam significantly. This is a sure sign of the pendulum swinging back: a company officer centrally managing the value of data. And a CDO’s job isn’t to empower analysts per se, although that will often be part of what they do. If that were all it was, companies could save a lot of money by handing out tools and not creating the CDO position. The CDO’s job is to extract maximum value from data. That can be done in many ways, including customer-facing portals, large-scale analytical apps, data feeds that stem from unified views of business entities, embedded BI inside other enterprise applications, and so on. So as the CDO position picks up steam, we can expect to see larger data-focused projects where information is managed and shared across divisional and even company boundaries, leading to better data monetization, lower per-user cost of data, and higher business value per unit of data. – Jake Freivald, Vice President, Information Builders

Data Science

In 2017 we will see an increased valuation of critical thinking in the workplace, as people realize that there is not a deficit of data in the enterprise, but a deficit of insight. Companies will realize that data without additional tenets of knowledge or value is both polarizing and damaging. The role of data scientist will evolve to become “the knowledge engineer.” We will see fewer “alchemists” – promising magic from data patterns alone – and more “chemists” – combining the elements of knowledge, data, context, and insight to deliver productivity enhancements that we have yet to imagine. – Donal Daly, CEO, Altify

We spend a lot of time thinking about what developers want and need in a tool, both right now and in the future. In software development, complexity is inevitable – tech stack, libraries, formats, protocols – and that complexity won’t be decreasing any time soon. The most successful tool is one that is simple, but not dumbed down or less powerful. I believe that tools will need to become even more powerful in 2017, and the successful tools will be ones that work for the developer rather than the other way around. Tools will need to be smarter to learn from the user automatically, proactive to inform the user automatically, collaborative to connect users with others, and visual and tangible to show and manipulate. This meta-increase in toolsets is possible now for a number of reasons. Memory, processing power, and connectivity speed continue to explode, while at the same time visual tools (like 4K screens) get better and better. Plus, the continued rise of social coding increases the need for powerful collaborative tools to support the developer. – Abhinav Asthana, CEO of Postman

2017 will be the “Year of the Data Scientist.” According to the McKinsey Global Institute, demand for data scientists is growing by as much as 12 percent a year and the US economy could be short by as many as 250,000 data scientists by 2024. Thanks to advances driven by AI companies in 2017, however, 2018 is when AI will become buildable – not just usable – but buildable by non-data scientists. This is not to say that data science will become less useful or in-demand post-2017, rather that some of the simpler problems will be solvable through a hyper-personalized AI built by someone who is not a data scientist. This will open up capabilities for coders and data scientists that will be mind-blowing. – Jeff Catlin, CEO, Lexalytics

SQL will have another extraordinary year. SQL has been around for decades, but from the late 1990s to the mid-2000s, it went out of style as people started exploring NoSQL and Hadoop alternatives. SQL, however, has come back with a vengeance. The renaissance of SQL has been beautiful to behold and I don’t even think it’s near its peak yet. The innovations we’re seeing are blowing our minds. BigQuery has created a product that is essentially infinitely scalable, the original goal of Hadoop, AND practical for analytics, the original goal of relational databases. Additionally, Google recently announced that the new version, BigQuery Standard SQL, is fully ANSI compliant. Prior to this release, BigQuery’s Legacy SQL was peculiar and so presented a steep learning curve. BigQuery’s implementation of Standard SQL is amazing, with really advanced features like Arrays, Structures, and user-defined functions that can be written in both SQL and JavaScript. SQL engines for Hadoop have continued to gain traction. Products like SparkSQL and Presto are popping up in enterprises and as cloud services because they allow companies to leverage their existing Hadoop clusters and cloud storage for speedy analytics. What’s not to love? To top it all off, companies like Snowflake, and now Amazon Athena, are building giant SQL data engines that query directly on S3 buckets, a source that was previously only accessible via command line. 2016 was the best year SQL has ever had — 2017 will be even better. – Lloyd Tabb, Founder, Chairman & CTO, Looker
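
As a small illustration of SQL engines running over big data storage, the sketch below registers a file-based dataset as a table and queries it with PySpark's SQL interface; the path and column names are placeholders.

```python
# A minimal sketch of SQL over a data lake with Spark SQL. The path and
# column names are placeholders; reading from S3 also requires the usual
# Hadoop/S3 connector configuration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-renaissance").getOrCreate()

events = spark.read.json("s3://my-bucket/events/*.json")   # placeholder path
events.createOrReplaceTempView("events")

top_pages = spark.sql("""
    SELECT page, COUNT(*) AS views
    FROM events
    WHERE event_type = 'pageview'
    GROUP BY page
    ORDER BY views DESC
    LIMIT 10
""")
top_pages.show()
```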

The data skills gap widens. Problem: The demand for data scientists and data engineers continues to challenge enterprises who need to make the most of their data. And even when the right skillsets are at play, the New York Times recently reported that these critical personnel are often spending more time cleaning the data than actually mining it. Prediction: Businesses will seek any tool that helps put more data in the hands of business analysts with minimal data scientist intervention. In addition, new machine learning tools will emerge to help automate some of these data-focused tasks to scale the models that data scientists create. – SnapLogic

There will continue to be a shortage of qualified data scientists. I don’t expect the market to be in equilibrium until 2019 at the earliest. Every major university will have a data science program in place by 2017. – Michael Stonebraker, Ph.D., co-founder and CTO, Tamr

Data Scientists failed to predict the election—will they fail to predict your business? The other day I was giving a talk on ‘What is Machine Learning?’ and, barely two minutes in, someone said, ‘You’re saying we can do all these amazing things with big data and algorithms, but you had all the data in the world for the election, and you got it wrong. Why should we trust you?’ There are plenty of important takeaways from the election: First, Nate Silver and enterprise data scientists both try to learn from historical events to predict future events, and the margins of error can be high in both. But in predicting an election you only get one chance. In business, you make predictions constantly, and the cost of error tends to be low. Also, there are fewer curve-balls in business. Customers and businesses tend to be pretty predictable. Voters and politicians are not. Second, the media committed the same sin we see business people make every day: falling too hard for the analytic ‘black box’ that does seemingly magical number crunching. Without a basic understanding of what types of analyses have been done on different types of data and why, the end users will never know the true value of the information they have at their disposal or how they should use it. There’s no better illustration of this than the little needle on The New York Times’ election ‘dial’ which bounced violently from Clinton to Trump in the middle of the evening and had me screaming at my phone. – Steven Hillion, Chief Product Officer, Alpine Data

GPUs and HPC

2017 will be the year when “accelerated compute” becomes known just simply as “compute”. This is a direct response to the use cases driving up utilization the most, and the explosion of accelerator availability in both the data center and the public cloud. As these use cases continue to ramp up in the Enterprise (particularly machine learning), we’ll see even more demand for computational accelerators. CPUs have been king for decades, and serve the general purpose quite well. But what we’re seeing now is an emphasis on deriving insight from data, versus just indexing it, and this requires orders of magnitude faster (and more specialized) resource in order to deliver feasible economics. It’s not that computational accelerators are necessarily “faster” than CPUs, but rather, they can be deployed as coprocessors and therefore take on very specialized identities. Because of this specialization, they can be programmed to do certain very discrete computations much quicker and at lower aggregate power consumption. Application developers and ISVs are pouncing on these capabilities (and their increasing availability) to create amazing new products and services. A good example of a red-hot technology in this space are GPU-accelerated databases, such as GPUdb from Kinetica (available as a turnkey workflow on the Nimbix Cloud). Rather than focusing on indexing massive amounts of information like a traditional RDBMS, it’s used to ingest fragments into memory for tremendously fast queries. In fact the queries are so fast that it blurs the line between analytics and machine learning (after all, machine learning involves processing massive data sets very quickly in order to create “models” that operate somewhat like human brains). Despite the advanced computing underneath, these tools serve traditional enterprise markets, not just “research labs”. Not only does its product name imply it, but the use case simply would be impossible without GPUs. This is a very real example of mainstream technology that demands computational accelerators. In talking with customers and business partners, the one common thread they all seek is more accelerated computational power (at reasonable economics) to do even more advanced things. I don’t see this trend slowing down anytime soon, which is why I’m predicting that we’ll drop the “accelerated” in front of “compute” as it will become a given. – Leo Reiter, CTO, Nimbix

Graphical Processing Units (GPUs) are capable of delivering up to 100-times better performance than even the most advanced in-memory databases that use CPUs alone. The reason is their massively parallel processing, with some GPUs containing over 4,000 cores, compared to the 16-32 cores typical in today’s most powerful CPUs. The small, efficient cores are also better suited to performing similar, repeated instructions in parallel, making GPUs ideal for accelerating the compute-intensive workloads required for analyzing large streaming data sets in real-time. – Eric Mizell, Vice President, Global Solutions Engineering, Kinetica
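
A rough sketch of that parallelism argument: the same reduction run with NumPy on the CPU and CuPy on a GPU. This assumes a CUDA-capable GPU and the cupy package, and the actual speedup varies widely by hardware and workload.

```python
# A minimal sketch comparing a CPU reduction (NumPy) with the same reduction
# on a GPU (CuPy). Requires a CUDA-capable GPU and the cupy package.
import time
import numpy as np
import cupy as cp

n = 50_000_000
cpu_data = np.random.random(n).astype(np.float32)
gpu_data = cp.asarray(cpu_data)                 # copy the array to GPU memory

t0 = time.time()
cpu_result = np.sqrt(cpu_data).sum()
cpu_time = time.time() - t0

t0 = time.time()
gpu_result = cp.sqrt(gpu_data).sum()
cp.cuda.Stream.null.synchronize()               # wait for the GPU to finish
gpu_time = time.time() - t0

print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s  "
      f"(results match: {np.isclose(cpu_result, float(gpu_result))})")
```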

Amazon has already begun deploying GPUs, and Microsoft and Google have announced plans. These cloud service providers are all deploying GPUs for the same reason: to gain a competitive advantage. Given the dramatic improvements in performance offered by GPUs, other cloud service providers can also be expected to begin deploying GPUs in 2017. – Eric Mizell, Vice President, Global Solutions Engineering, Kinetica

Hadoop

As I predicted last year, 2016 was not a good year for Hadoop and specifically for Hadoop distribution vendors. Hortonworks is trading at one-third its IPO price and the open source projects are wandering off. IaaS cloud vendors are offering their own implementations of the open source compute engines – Hive, Presto, Impala and Spark. HDFS is legacy in the cloud and is rapidly being replaced by blob storage such as S3. Hadoop demonstrates the perils of being an open source vendor in a cloud-centric world. IaaS vendors incorporate the open source technology and leave the open source service vendor high and dry. Open source data analysis remains a complicated and confusing world. Wouldn’t it be nice if there were one database that could do it all? Wait, there is one, it’s called Snowflake. – Bob Muglia, CEO, Snowflake Computing Inc.

Don’t be a Ha-dope! For all those folks running around saying Hadoop is dead – they’re dead wrong. In 2017, we’re going to see an increased adoption of Hadoop. So far this year, I haven’t talked to a single organization with a digital data platform who doesn’t see Hadoop at the center of their infrastructure. Hadoop is an assumed part of every modern data architecture and nobody can question the value it brings with its flexibility of data ingestion and its scalable computational power. Hadoop is not going to replace other databases but it will be an essential part of data ingestion in the IoT/digital world. – George Corugedo, CTO, RedPoint Global

Hadoop distribution vendors will have crossed the chasm — unstructured data in Hadoop is a reality. But, since the open source problem has not been addressed, they aren’t making much money. As such, many of these vendors will be acquired by bigger players, and the larger ISV Hadoop vendors may band together to create bigger entities in hopes of capitalizing on economies of scale. – Joanna Schloss, Director of Product Marketing, Datameer

The Failure (and future) of Hadoop. Problem: Fifty percent of Hadoop deployments have failed. While it’s commanded the lion’s-share of attention, it’s suffered from product overload. Because new projects are added every month and the nature of the data in the Hadoop cluster is ever-growing, it’s created a complex, multidimensional environment that’s difficult to maintain in production. Prediction: To actually make Hadoop work beyond a test environment, enterprises will shift it to the cloud in 2017, and abstract storage from compute. This enables enterprises to select the tools they want to use (Spark, Flink or others) instead of being forced to carry excessive Hadoop baggage with them. – SnapLogic

In-Memory Computing

In 2017, in-memory computing will enter the mainstream as the enabling technology for adding operational intelligence to live systems, and it will supplant legacy streaming technologies. In 2017, the adoption of in-memory computing technologies, such as in-memory data grids (IMDGs), will provide the enabling technology to capture perishable opportunities and make mission-critical decisions on live data. Driven by the need for real-time analytics, the IMDG market alone – currently estimated at $600 million – will exceed $1 billion by 2018, according to Gartner. Unlike big data technologies, such as Spark, created for the data warehouse and legacy streaming technologies, in-memory computing enables the straightforward modeling and tracking of a live system by analyzing and correlating persistent data with live fast-changing data in real time, and it provides immediate feedback to that system for automated decision making. Gartner has recently elevated the term “digital twin” in its recent Top 10 strategic technology trends for 2017 to describe the shift in focus from data streams to the data sources which produce those streams. In-memory computing technology enables applications to easily create and manage digital representations of real-world devices, such as Industrial Internet of Things (IIoT) sensors and actuators, and this enables real-time introspection for operational intelligence. – Dr. William Bain, CEO and founder, ScaleOut Software
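
As a toy illustration of the digital twin idea Bain describes, the sketch below keeps an in-memory object per device, updates it as events stream in, and queries live state instantly; a real in-memory data grid distributes these objects across a cluster, so this single-process version only shows the modeling concept.

```python
# A minimal, single-process sketch of the digital twin concept: one in-memory
# object per device, updated from a live event stream and queryable at once.
from collections import defaultdict

class DeviceTwin:
    def __init__(self):
        self.last_temp = None
        self.alerts = 0

    def ingest(self, event):
        self.last_temp = event["temperature_c"]
        if self.last_temp > 80:
            self.alerts += 1          # immediate feedback for automated decisions

twins = defaultdict(DeviceTwin)

stream = [
    {"device_id": "pump-1", "temperature_c": 65},
    {"device_id": "pump-2", "temperature_c": 91},
    {"device_id": "pump-1", "temperature_c": 84},
]
for event in stream:
    twins[event["device_id"]].ingest(event)

hot = {d: t.last_temp for d, t in twins.items() if t.alerts > 0}
print("devices needing attention:", hot)
```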

In-Memory and Temporary Storage become more important as new sources of data growth such as augmented and virtual reality, AI and machine learning become popular: While analyzing these new sources of data is becoming critical to long-term business goals, storing the data long term is both impractical and unnecessary when the results of analysis are more important than the data itself. Although 2017 will see plenty of data growth that will require permanent storage, most net new data generated next year will be ephemeral; it will quickly outlive its usefulness and be discarded. So despite exponential data growth, there won’t be as much storage growth as we might otherwise have expected. – Avinash Lakshman, CEO, Hedvig

IoT

The future of IoT will be focused on security. Recently, a major DDoS attack caused outages at major organizations. This is going to be a growing issue in the near future, and the concern at the forefront of IoT will be safeguarding networks and connected devices. – Dr. Werner Hopf, CEO and Archiving Principal, Dolphin Enterprise Solutions Corporation

IoT grows up – The enterprise has paid attention to IoT for some time, though this year will be the year we move past the "wow" phase and into the "how do we securely and effectively bring IoT to the enterprise, how do we handle the high-speed data ingest, and how do we optimize analytics and decisions based on IoT data" phase. Those will be the questions enterprises will need to solve in 2017. – Leena Joshi, VP of Product Marketing, Redis Labs

IoT continues to pose a major threat. In late 2016, all eyes were on IoT-borne attacks. Threat actors were using Internet of Things devices to build botnets to launch massive distributed denial of service (DDoS) attacks. In two instances, these botnets were built from unsecured "smart" cameras. As IoT devices proliferate, and everything has a Web connection — refrigerators, medical devices, cameras, cars, tires, you name it — this problem will continue to grow unless proper precautions like two-factor authentication, strong password protection and others are taken. Device manufacturers must also change behavior. They must scrap default passwords and either assign unique credentials to each device or apply modern password configuration techniques for the end user during setup. – A10 Networks

The Internet of Things (IoT) is widely acknowledged as a big growth area for 2017. More connected devices will create more data, which has to be securely shared, stored, managed and analyzed. As a result, databases will become more complex and the management burden will increase. Those organizations which can most effectively monitor their database layer to optimize peak performance and resolve bottlenecks will be better placed to exploit the opportunities the IoT will bring. – Mike Kelly, CTO, Blue Medora

The future of retirement is gearing up for a major shift and Internet of Things (IoT) along with it. Baby boomers are retiring, and there are many economic and lifestyle reasons for them to live in their homes longer. This means changes for insurance companies, healthcare, medical devices, and appliance manufacturers. The proliferation of the IoT or “the connected life” allows for monitoring the elderly in their homes, from monitoring blood pressure to typical daily habits such as whether or not they turned on the TV or opened the refrigerator. Elderly parents want autonomy and their children want them to be safe – connected technology can bridge the gap between the two. Basic monitoring as well as more advanced medical monitoring is shifting the way we will live out our retirement. – Kevin Petrie, Attunity

The Internet of Things (IoT) is still a popular buzzword, but adoption will continue to be slow. Analyzing data from IoT and sensors clearly has the potential for massive impact, but most companies are far (FAR!) from ready. IoT will continue to get lots of lip service, but actual deployments will remain low. Complexity will continue to plague early adopters that find it a major challenge to integrate that many moving parts. Companies will instead focus resources on other low-hanging fruit data and analytics projects first. – Prat Moghe, Founder and CEO, Cazena

The Internet of Things is delivering on the promise of big data. Increasingly, big data projects are going through multiple updates in a single year – and the Internet of Things (IoT) is largely the reason. That's because IoT makes it possible to examine specific patterns that deliver specific business outcomes, and this increasingly has to be done in real time. This will drive healthier investment, and faster returns, in big data projects. – Ettienne Reinecke, Chief Technology Officer, Dimension Data

Next year, organizations will stop putting IoT data on a pedestal, or, if you like, in a silo. IoT data needs to be correlated with other data streams, tied to historical or master data or run through artificial intelligence algorithms in order to provide business-driving value. Despite the heralded arrival of shiny new tools that can handle IoT’s massive, moving workloads, organizations will realize they need to integrate these new data streams into their existing data management and governance disciplines to gain operational leverage and ensure application trust. – Girish Pancha, CEO and Founder, StreamSets
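
As a toy sketch of the integration step described above, one common first move is simply to merge a raw IoT stream with existing master data before applying a rule. The field names and values are invented, and this is not StreamSets' pipeline syntax.

import pandas as pd

# Invented sample data: a raw IoT stream and the master data it is joined to.
readings = pd.DataFrame({"device_id": ["d1", "d2", "d1"],
                         "temp_c": [71.2, 68.5, 90.4]})
master = pd.DataFrame({"device_id": ["d1", "d2"],
                       "site": ["plant-A", "plant-B"],
                       "max_temp_c": [85.0, 85.0]})

# Correlate the stream with master data, then apply a simple governance rule.
enriched = readings.merge(master, on="device_id", how="left")
alerts = enriched[enriched["temp_c"] > enriched["max_temp_c"]]
print(alerts[["device_id", "site", "temp_c"]])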

The Internet of Things Architect role will eclipse the data scientist as the most valuable unicorn for HR departments. The surge in IoT will produce a surge in edge computing and IoT operational design. Thousands of resumes will be updated overnight. Additionally, fewer than 10% of companies realize they need an IoT Analytics Architect, a distinct species from the IoT System Architect. Software architects who can design both distributed and central analytics for IoT will soar in value. – Dan Graham, Internet of Things Technical Marketing Specialist, Teradata

At Least one Major Manufacturing Company will go belly up by not utilizing IoT/big data: The average lifespan of an S&P 500 company has dramatically decreased over the last century, from 67 years in the 1920s to just 15 years today. The average lifespan will continue to decrease as companies ignore or lag behind changing business models ushered in by technological evolutions. It is imperative that organizations find effective ways to harness big data to remain competitive. Those that have not already begun their digital transformations, or have no clear vision for how to do so, have likely already missed the boat—meaning they will soon be a footnote in a long line of once-great S&P 500 players. – Ashley Stirrup, CMO, Talend

Machine Learning

In-memory computing techniques will leverage the power of machine learning to enhance the value of operational intelligence. The year 2017 will see an accelerated adoption of scenarios that integrate machine learning with the power of in-memory computing, especially in e-commerce systems and the Internet of Things (IoT). E-commerce applications benefit by offering highly personalized experiences created by tracking and analyzing dynamic shopping behavior. IoT applications, such as those associated with windmills and solar arrays, benefit by delivering predictive feedback based on rapidly emerging patterns. In both of these applications, machine learning techniques can dramatically deepen the introspection and enhance operational intelligence. Once practical only on supercomputers, machine learning techniques have evolved to become increasingly available on standard, commodity hardware. This enables IMDGs to apply them to the analysis of fast-changing data and specifically to dynamic digital models of live systems. The ability of IMDGs to perform iterative computation in real time and at extreme scale enables machine learning techniques to be easily integrated into stream processing, which provides operational intelligence. – Chris Villinger, Vice President, Business Development and Marketing, ScaleOut Software

Machine learning will change the fabric of the enterprise – Machine learning will enable the adaptive enterprise, one that aligns business outcomes and customer needs in new and different ways. – Leena Joshi, VP of Product Marketing, Redis Labs

In 2017, I expect to see an increased emphasis on democratization of machine learning and artificial intelligence (AI). We've seen machine learning evolve from IBM Watson a few years ago to, most recently, offerings from Salesforce and Oracle. While many think machine learning has gone mainstream, there is the potential for much more, such as performance monitoring and intelligent alerting. While companies might face false starts and initial mishaps while trying to crack the code, the increased number of organizations turning to AI and machine learning will lead to more successes next year. This increased adoption will help bring innovations faster to market, especially from a wide range of industries. – Mike Kelly, CTO, Blue Medora

There has been a lot of hype around machine learning for some time now, but in most cases it hasn’t been used very effectively. As we move forward, organizations are learning how to bring together all the ingredients needed to leverage machine learning – and I think that’s the story for 2017. We’ll see machine learning move from a mystical, over-hyped holy grail, to seeing more real-world, successful applications. Those who dismiss it as hocus-pocus will finally understand it’s real; those who distrust it will come to see its potential; and companies that are poised to leverage this capability for appropriate, practical applications will be able to ride the swell. It will still be a few years before machine learning becomes a tidal wave, but in 2017 it will be clear that it has a credible place in the business toolkit. – Jeff Evernham, Director of Consulting, North America, Sinequa

In 2017, ‘centralized-only’ monolithic software and silos of data disappear from the enterprise. Smart devices will collaborate and analyze what one another is saying. Real time machine-learning algorithms within modern distributed data applications will come into play – algorithms that are able to adjudicate ‘peer-to-peer’ decisions in real time. Data has gravity; it’s still expensive to move versus store in relative terms. This will spur the notion of processing analytics out at the edge, where the data was born and exists, and in real-time (versus moving everything into the cloud or back to a central location). – Scott Gnau, Chief Technology Officer, Hortonworks

Machine Learning will become de rigueur in the enterprise without many even noticing: What's unique to today's machine learning technology is that much of it originated and continues to be open source. This means that many different products and services are going to build machine learning into their platforms as a matter of course. As a result, more enterprises will be adopting machine learning in 2017 without even knowing they're doing it because vendors are actively using ML to make their products smarter. Even existing products will soon use some variety of machine learning that will be delivered via an update or as an extra perk. – Avinash Lakshman, CEO, Hedvig

The Future of Machine Learning. We will finally deliver on the promise of machine learning: building models that can directly suggest or take action for large audiences. When we effectively scale machine learning, we can greatly increase the action-taking bandwidth of an enterprise. Instead of presenting a small number of business users in the enterprise with historical statistics à la business intelligence, companies can bring specific recommendations to thousands of front-line individuals responsible for taking action on behalf of the business. – Josh Lewis, VP of Product, Alpine Data

Machine learning-washing – Expect the market to be flooded with solutions that promise machine learning capabilities and grab headlines, but deliver no substance. – Toufic Boubez, VP Engineering, Machine Learning, Splunk

NoSQL

In 2017, NoSQL’s coming of age will be marked by a shift to workload-focused data strategies, meaning executives will answer questions about their business processes by examining the data workloads, use cases and end results they’re looking for. This mindset is in contrast to prior years when many decisions were driven from the bottom up by a technology-first approach, where executives would initiate projects by asking what types of tools best serve their purposes. This shift has been instigated by data technology, such as NoSQL databases, becoming increasingly accessible. – Adam Wray, CEO, Basho Technologies

Security

Cloud and data security agility will gain further importance — This is a rather obvious prediction, given the phobia of data breaches and the reticence of industries such as the financial sector to use public cloud technologies. Meanwhile, life sciences and retail, to name two industries, continue to forge ahead, realizing efficiencies while adhering to some of the strictest privacy and governance requirements set forth by regulators. With requirements such as the General Data Protection Regulation (GDPR) now in effect, companies not only have to ensure that their data is physically housed in the right geographic centers, but that access complies with the most stringent regulations related to personal access and approvals for use of that data. Many vendors are now taking steps to provide the most secure, validated and agile infrastructure possible. Partnerships and use of Amazon Web Services, Google Cloud, and Microsoft Azure go a long way to providing the confidence and flexibility that many companies are looking for. In 2017, vendors offering Platform as a Service (PaaS) and tools themselves must also do their part in complying with Service Organization Control (SOC) requirements as well as, in the case of healthcare data, HITRUST (the Health Information Trust Alliance), which provides an established security framework that can be used by all organizations that create, access, store or exchange sensitive and regulated data. – Ramon Chen, CMO, Reltio

Under the covers, machine learning is already becoming ubiquitous as it is embedded in many services that consumers take for granted. Increasingly, machine learning is becoming embedded in enterprise software and tooling for integrating and preparing data. Machine learning is placing a stress on enterprises to make data science a team sport; a big area for growth in 2017 will be solutions that spur collaboration, so the models and hypotheses that data scientists develop do not get bottled up on their desktops. – Ovum

Expect IoT to be even more vulnerable. Previous hacks into connected devices could be deemed minor or inconvenient. But the recent DDoS attack involving Dyn shows IoT hacks are taking place on a larger and more disruptive scale. Hacking lightbulbs or setting off fire alarms is on the more mischievous side of the spectrum, but having the ability to override a car's brake system or a "smart" pacemaker, for example, can turn connected devices into deadly weapons. Even worse, the lack of one standard for IoT (unlike Wi-Fi) will just make our devices more susceptible to large-scale breaches. Vendors have to recognize the parallels between the security issues when Wi-Fi hit the mass market and what's happening with IoT. If they don't move quickly to address the vulnerabilities, government regulations will need to come into play. Still, it would take something disastrous to galvanize government into action. – Richard Walters, SVP of Security Products, Intermedia

Over the past year there has been increased focus on data privacy, especially with the passing of the GDPR, which represented one of the most comprehensive and refined sets of standards put forth to date. In 2017, the trend line will continue to move in the same direction and there will be a higher premium on data protection. With increased sensitivity around personal data, software vendors and enterprises will need to focus on what is being done to protect and manage personal data within the enterprise. To be successful, companies must embrace privacy by design for themselves and the service providers they work with. – Anthony West, CTO, Actiance

Spark

Spark and machine learning light up big data. In a survey of data architects, IT managers, and BI analysts, nearly 70% of the respondents favored Apache Spark over incumbent MapReduce, which is batch-oriented and doesn’t lend itself to interactive applications or real-time stream processing. These big-compute-on-big-data capabilities have elevated platforms featuring computation-intensive machine learning, AI, and graph algorithms. Microsoft Azure ML in particular has taken off thanks to its beginner-friendliness and easy integration with existing Microsoft platforms. Opening up ML to the masses will lead to the creation of more models and applications generating petabytes of data. In turn, all eyes will be on self-service software providers to see how they make this data approachable to the end user. – Dan Kogan, director of product marketing at Tableau

Analytics will experience a revolution in 2017. In the past, conversations about big data always included Hadoop (HDFS). But the industry today has hit a wall with its limitations in backing up and preserving big data. As a result, big data has become a black hole in the HDFS cluster with no one managing it. In 2017, the Spark operating model – through 'in-memory analytics' – will become a popular big data analytics option due to its ability to significantly reduce data movement and allow analytics to occur much earlier and faster in the process. – Vincent Hsu, VP, IBM Fellow, CTO for Storage and Software Defined Environment, IBM
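
As a minimal sketch of the in-memory analytics pattern (not IBM's implementation), a Spark job can cache its working set once and run several analyses against memory instead of re-reading storage. The paths and column names below are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("in-memory-analytics-sketch").getOrCreate()

clicks = spark.read.json("s3a://example-bucket/clickstream/")   # hypothetical path
clicks.cache()                                  # keep the working set in executor memory

# Repeated analyses reuse the cached data instead of re-reading storage.
clicks.groupBy("country").count().show()
clicks.agg(F.avg("session_seconds").alias("avg_session")).show()

spark.stop()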

Storage

People may think backup and recovery is dead, but they are sorely mistaken, and the move to the cloud actually makes backup and recovery more important than ever to safeguard data. Relying on the cloud won't take care of everything! The need for backup and recovery will become very real as organizations continue betting on enterprise applications. Moreover, backup and recovery will take center stage as IT Ops and others in organizations have never stopped worrying about recovery, particularly as companies aggressively move toward modernized application and data delivery and consumption architectures. The likelihood of not knowing how to address or who to turn to in the event of an outage is just too great a risk. – Tarun Thakur, Co-founder and CEO at Datos IO

The Rise of the JBOD. In 2017, more users will come to understand that the storage for their scale-out nodes — whether you call it software-defined, “server SAN,” DAS, hyperconverged, whatever — can be attached externally to servers instead of buying servers with lots of disks and SSDs, without losing any of the performance or ease-of-use of internal DAS. Using simple, dumb, industry standard SAS JBODs (Just a Bunch Of Disks) means not having to throw away your storage when you upgrade your servers and vice-versa. It also gives you better flexibility and density in your deployments. – Tom Lyon, Chief Scientist, DriveScale

Verticals

One of the ongoing challenges in using big data to improve outcomes in healthcare has been its siloed nature. Healthcare providers have detailed clinical (patient) data within their organizations, while health insurers (payers) have more general claims data that goes across many providers. That is beginning to change, though, as the move to value-based care is encouraging providers and health payers to share their data to create a more complete picture of the patient. The latest trend is to bring in additional behavioral data, such as socio-economic and attitudinal data, to create more of a 360-degree view of not only what patients do but also what drives them to do it, much as Facebook and Amazon.com use behavioral data to match users to relevant content. By applying next-generation analytics to this larger dataset, providers and payers can work together to help patients become healthier and stay healthy, reducing costs while helping them lead happier, more productive lives. – Rose Higgins, President, SCIO Health Analytics

We'll usher in the next iteration of personalized care. Increased self-tracking, preventative care efforts, and advances in data science will give us more information on patients than ever before. We'll use this data to create highly individual portraits of patients that, in turn, enable us to match physicians to patients in a very specific way. We can assign physicians based on their past success in treating similar patients and enable patients to have more informed and personal care. – Mark Scott, Chief Marketing Officer, Apixio

Data Analytics will go vertical (financial, medical, etc), and companies that build vertical solutions will dominate the market. General-purpose data analytics companies will start disappearing. Vertical data analytics startups will develop their own full-stack solutions to data collection, preparation and analytics. – Ihab Ilyas, co-founder of Tamr and Professor of Computer Science at the University of Waterloo

Big Data Will Transform Every Element of the Healthcare Supply Chain: The entire healthcare supply chain has been undergoing digitization for the last several years. We've already witnessed the use of big data to improve not only patient care but also payer-provider systems: reducing wasted overhead, predicting epidemics, curing diseases, improving quality of life and avoiding preventable deaths. Combine this with the mass adoption of edge technologies that improve patient care and wellbeing, such as wearables, mobile imaging devices and mobile health apps, and the use of data across the entire healthcare supply chain is about to reach a critical inflection point where the payoff from these initial big data investments will be bigger and come more quickly than ever before. As we move into 2017, healthcare leaders will find new ways to harness the power of big data to identify and uncover new areas for business process improvement, diagnose patients faster and drive better, more personalized preventative programs by integrating personally generated data with broader healthcare provider systems. – Ashley Stirrup, CMO, Talend

Author:  Daniel Gutierrez

Source:  http://insidebigdata.com/2016/12/21/big-data-industry-predictions-2017

Over the past few years we have seen a surge in cyber attacks against well-known organizations, each seemingly larger than the last. As cybercriminals look for innovative ways to penetrate corporate infrastructures, the challenges for brand owners to protect their IP have steadily grown. Fraudsters will stop at nothing to profit from a corporate entity's security vulnerabilities, and the data they steal can fetch a hefty price in underground online marketplaces.

Whether it is a company with a large customer base that accesses and exchanges financial or personal information online, or a small brand that has IP assets to protect, no company is exempt. While banking and finance organizations are the most obvious targets, an increasing number of attacks are taking place on companies in other industries, from healthcare and retail to technology, manufacturing and insurance companies. Data breaches can have a damaging impact on a company’s internal IT infrastructure, financial assets, business partners and customers, to say nothing of the brand equity and customer trust that companies spend years building.

Battlegrounds: Deep Web and Dark Web

A common analogy for the full internet landscape is that of an iceberg, with the section of the iceberg above water level being the surface web, comprised of visible websites that are indexed by standard search engines. It is what most people use every day to find information, shop and interact online, but it accounts for only about four percent of the Internet.

The remaining sites are found in the Deep Web, which includes pages that are unindexed by search engines. A large proportion of this content is legitimate, including corporate intranets or academic resources residing behind a firewall.

However, some sites in the Deep Web also contain potentially illegitimate or suspicious content, such as phishing sites that collect user credentials, sites that disseminate malware that deliberately try to hide their existence, websites and marketplaces that sell counterfeit goods, and peer-to-peer sites where piracy often takes place. Consumers may unknowingly stumble upon these and are at risk of unwittingly releasing personal information or credentials to fraudulent entities.

Deeper still is the Dark Web, a collection of websites and content that exist on overlay networks whose IP addresses are completely hidden and require anonymizer software, such as Tor, to access. While there are a number of legitimate users of Tor, such as privacy advocates, journalists and law enforcement agencies, its anonymity also makes it an ideal foundation for illicit activity. Vast quantities of private information, such as log-in credentials, banking and credit card information, are peddled with impunity on underground marketplaces in the Dark Web.

Waking up to the Threats

The Deep Web and Dark Web have been in the public eye for some time, but in recent years, fraudsters and cybercriminals have been honing their tactics in these hidden channels to strike at their prey more effectively and minimize their own risk of being caught. The anonymity in the Dark Web allows this medium to thrive as a haven for cybercriminals, where corporate network login credentials can be bought and sold to the highest bidder, opening the door to a cyberattack that most companies are unable to detect or prevent.

While Deep Web sites are not indexed, consumers may still stumble upon them, unaware they have been redirected to an illegitimate site. The paths to these sites are many: typosquatted pages with names that are close matches to legitimate brands; search engine ads for keywords that resolve to Deep Web sites; email messages with phishing links; or even mobile apps that redirect.
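
Detecting the typosquatted names mentioned above is largely a string-similarity problem. A minimal sketch, assuming a protected brand name and an invented list of observed domains, might flag registrations that sit within a small edit distance of the brand; real monitoring would draw on zone files or commercial feeds.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                    # deletion
                           cur[j - 1] + 1,                 # insertion
                           prev[j - 1] + (ca != cb)))      # substitution
        prev = cur
    return prev[-1]

brand = "examplebank"                                      # invented brand
observed = ["examp1ebank.com", "exarnplebank.com", "flowershop.com"]

for domain in observed:
    if edit_distance(domain.split(".")[0], brand) <= 2:
        print(f"possible typosquat: {domain}")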

Moreover, as more users learn the intricacies of Tor to access and navigate the Dark Web, the scale of anonymity grows. More points in the Dark Web's distributed network of relays make it more difficult to identify a single user and track down cybercriminals. It's like trying to find a needle in a haystack when the haystack keeps getting larger and larger.

The Science and Strategy Behind Protection

Brands can potentially mitigate abuse in the Deep Web, depending on the site. If a website attempts to hide its identity from a search engine, there are technological solutions to uncover and address the abuse. Conventional tools commonly used by companies to protect their brands can also tackle fraudulent activity in the Deep Web, including takedown requests to ISPs, cease and desist notices and, if required, the Uniform Domain-Name Dispute-Resolution Policy (UDRP).

As for the Dark Web, where anonymity reigns and the illicit buying and selling of proprietary and personal information are commonplace, companies can arm themselves with the right technology and threat intelligence to gain visibility into imminent threats. Actively monitoring fraudster-to-fraudster social media conversations, for example, enables companies to take necessary security precautions prior to a cyberattack, or to prevent or lessen the impact of a future attack. In the event of a data breach where credit card numbers are stolen, threat intelligence can help limit the financial damage to consumers by revealing stolen numbers before they can be used, so the bank can cancel them.
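
At its simplest, matching card numbers found in an underground dump against a bank's own issued cards is a set intersection. The sketch below uses well-known test card numbers as placeholders; a real programme would work with hashed values and far larger feeds.

# All card numbers below are standard test numbers used as placeholders.
issued_cards = {"4111111111111111", "4000000000000002", "5555555555554444"}
dump_from_monitoring = {"4111111111111111", "4999999999999999"}

compromised = issued_cards & dump_from_monitoring          # numbers seen in the dump
for card in sorted(compromised):
    print(f"cancel and reissue: card ending {card[-4:]}")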

Technology can even help identify and efficiently infiltrate cybercriminal networks in the Dark Web that might otherwise take a considerable amount of manual human effort by a security analyst team. Access to technology can significantly lighten the load for security teams and anchor a more reliable and scalable security strategy.

In light of so many cyber threats, it falls to organizations and their security operations teams to leverage technology to identify criminal activity and limit financial liability to the company and irreparable damage to the brand.

Key Industries at Risk

A growing number of industries are now being targeted by cybercriminals, but there are tangible steps companies can take. For financial institutions, visibility into Dark Web activity yields important benefits. Clues for an impending attack might potentially be uncovered to save millions of dollars and stop the erosion of customer trust. Improved visibility can also help companies identify a person sharing insider or proprietary information and determine the right course of action to reduce the damage.

In the healthcare industry, data breaches can be especially alarming because they expose not only the healthcare organization’s proprietary data, but also a vast number of people’s medical information and associated personal information. This could include images of authorized signatures, email addresses, billing addresses and account numbers. Cybercriminals who use information like this can exploit it to compromise more data, such as social security numbers and private medical records. Credentials could even potentially lead to identities being sold.

Conclusion

Most organizations have implemented stringent security protocols to safeguard their IT infrastructure, but conventional security measures don’t provide the critical intelligence needed to analyze cyberattacks that propagate in the Deep Web and Dark Web. It is fundamentally harder to navigate a medium where web pages are unindexed and anonymity can hide criminal activity.

Meanwhile, cyberattacks on organizations across a wider number of sectors continue to surge, putting proprietary corporate information, trade secrets and employee network access credentials at risk. Businesses need to be aware of all threats to their IP in all areas of the Internet. Leveraging every available tool to monitor, detect and take action where possible is vital in addressing the threats that these hidden regions of the internet pose.

Author:  Charlie Abrahams

Source:  http://www.ipwatchdog.com/2016/12/14/brand-protection-deep-dark-web/id=75478


Surprising Ways to Reach Google

Most of the time, we use Google as a basic search engine to look up different events, places and questions. As business owners, we want to make sure that our business is easy to find and displays accurate information. The process to get your business listed, updated or corrected on Google can take weeks! Then there is always the possibility that something might go wrong. Throughout this blog post, I will take you through different situations that can occur with having your business listed on Google and the exciting avenues you can take to talk with Google.

Is Your Google Listing Verified?

All businesses need to be verified through Google+ to appear in the search engine as a credible source. The verification process includes having Google email, call or mail a four-digit code to you that you enter in the 'My Business' app. Below is a picture of a 'Google Approved' business.

Verify your Google listing

Once your listing is updated, it's a good idea to ensure all your information is correct. Beware that it may take some time for your changes to show up. Here are some issues you'll want to be aware of:

  • Updating Information. If you change your phone number or address, it could take Google weeks before the information is reflected correctly on your Google account.
  • Missing Information. Even though you have added all the correct information, it still might not be showing up correctly everywhere on Google Search, Google Maps or Google+.
  • Duplicate Address. If you have two businesses with the same address, Google will not approve the second address. Google only recognizes one business per location. For example, if you were BMW Manhattan Car Dealership and also sold motorcycles at your location, Google would not approve you being both BMW Manhattan Car and BMW Manhattan Motorcycle at the same location.
  • Duplicate Accounts. If you do not own a listing, then you cannot delete a business listing. This means that duplicate accounts for the same business could appear. They won't have the same information, which means you are losing business because customers are receiving information from an incorrect account.

How to Fix Your Listing

Now that we have looked at the reasons why we might need to reach out to Google, let’s look at the different and exciting ways that you can communicate with the large conglomerate.

  1. Google Help Center: If you do not have time to talk with anyone at Google, you will be sent to the Google Help Center. Here, Google has a list of frequently asked questions and corresponding articles to help you solve your problems. They also have step-by-step how-to videos!
  2. Online Form: Google also has a form for you to fill out under the Support section. With this detailed form, you can highlight the incorrect information and they will fix it for you. Take a look at a part of the form below:

Fix your Google listing

  3. Live Chat: With this option, you are set up in a chat room to instant message a Google representative. I have found this to be very useful for simple changes, such as requesting they update your information or adding a new member to a Google account.
  4. Phone: The final and most exciting way to reach Google is to give Google a call. Yes, you can call Google! After filling out the same form mentioned above, you can click the "call now" button and Google will give you a call. The Google representatives are trained to handle everything for you, from validating your listing to fixing duplicate accounts.

STREAM KICK START STEP: Take action and get started by Googling your business to make sure that all your information is coming up and that your listing is verified. If it's listed but does not have the verified check mark or has incorrect information, log into Google My Business to start the process of updating your listing!

Author:  Krista Meiers

Source : http://www.business2community.com/online-marketing/surprising-ways-reach-google-01723775#L8d08XuGIQO67BBL.97

With trends like ride sharing, autonomous vehicles, and the connected car, the auto industry is increasingly in the spotlight. As drivers contemplate letting computers take over control of the wheel for them, it brings up some important questions. What will cars of the future look like? What things will drivers be able to accomplish on their rides to work? And most importantly, what cool features will they be able to enjoy now that their attention doesn’t have to be on the road?

1. No parking skills? No need to fret

Parking sucks, especially the dreaded parallel. It's often tricky in congested areas, it sometimes leads to smashed alloy wheels and it's deeply embarrassing when not done correctly, which is why most are happy to hand over valet duties to a robot. Ford, Renault and many premium brands already offer systems that will hunt down parallel and reverse parking spots and then use sensors and cameras to correctly steer the vehicle into the space, only calling upon a human for throttle inputs.

But things are about to get a whole lot easier, as BMW and Mercedes-Benz now boast tech that simply requires a prod of a smartphone for perfect parking results. BMW’s Remote Control Parking is already on the 7 Series  —  and due to be rolled out on more models next year — and sees the car autonomously reverse into and pull out of spaces, while Mercedes’ Remote Parking Pilot does a similar thing but also caters for perpendicular parking. The latter will appear on the new E-Class, which is due out late this year or early 2017.

2. Connected from the road to the kitchen

When your car knows to open the garage door and turn the AC on as you head down the road, you know you've hit peak connectivity. Cars are rapidly advancing as personal assistants, making life easier for drivers. The latest multimedia systems from some of the largest car manufacturers, like Nissan, allow emails to be read and sent, hands-free calls to be made and Twitter to be updated on the move. Some even know to power themselves on!

The cars of the future will be an extension of your home. As the auto industry combines to meld with the IoT revolution, we’ll see connectivity that we’ve never had before. Wouldn’t it be great to record your favorite television show when you’re running late by communicating with your vehicle? The cars of the future and you will end up being quite the team. Can’t wait or don’t want to buy a new car? Adapters from companies like Autobrain, Automatic and Vinli will turn your car (as long as it’s built after 1996) into the 4G connected, Wi-Fi enabled, connected car of the future.

3. A mobile living room

When car owners are no longer required to keep their eyes on the road and hands on the wheel because computers are in the driver's seat, the journey will be just as important as the destination. To the discerning 21st-century mediaphile, this means HD screens, on-demand content streaming and one kick-ass, next-generation audio system to experience it with, just like one might in their living room, but with the bonus of a smaller space and killer surround sound. Companies such as Auro-3D have partnered with companies like Porsche to introduce three-dimensional spatial sound patterns that replicate real-life sound experiences reminiscent of the best concert halls, but all in the comfort of your own car. This setup delivers the best-possible music playback to make every trip a new driving experience, not just a ride.

4. Goodbye dials, hello gestures

Why touch, when you can wave? Rear-view mirrors, radios and more are moving away from the antiquated dial system to understand hand gestures through infrared cameras. Touch screens are increasingly becoming the easiest way to communicate with your vehicle, rather than fumbling with dials and switches. But the cars of the future won't even have you deal with potential smudges on that chrome finish. Thanks to leadership from Audi and Volvo, in efforts to de-clutter the dashboard and make you safer and more efficient, we're going to see even touch screens get the boot as swipes and gestures become the simplest and safest way to control functionality. Wave goodbye to those dials.

5. Never lose your keys again

We’ve seen in recent years the shift from key to keyless entry but next-generation cars take this one step further by completely removing them altogether. In the future, drivers will be able to unlock and start their cars using a fingerprint, retina scan or voice activation—similarly to how we access our smartphones today. And with how much time drivers save by not tearing the house apart looking for lost keys, they might be able to finish that book or learn a new language—or not. Plus, you’ll never have to worry about your teenager taking your car out without permission ever again. “Open the driver door, Tesla!” “I’m sorry Dave, I can’t do that.”

With all the cool new car technology on the horizon, it’s enough to make anyone want to give up public transit to commute in bumper-to-bumper traffic to catch up on shows, listen to the hottest new album release or just hang out with friends.

Author:  SPENCER MACDONALD

Source:  http://readwrite.com/2016/12/13/5-futuristic-car-technologies-that-are-available-now-or-heading-your-way-tl1

Dr Irengbam Mohendra Singh

Technological breakthrough is always fun as everyone gets his or her deserts in due course, like Eastern Indian Railways coming to Manipur by 2020 after 149 years, while driverless cars will be on the global market by 2020. Self-driving computerised cars with artificial intelligence, especially with "deep learning" (the ability of computers to use algorithms to solve problems), are taking humans out of the equation. Artificial intelligence provides autonomous cars with real-time decisions and human-like perception to control actions such as acceleration, steering, braking, stopping at traffic lights and changing lanes.

Western scientific innovations will continue, altering the world we live in. Driverless cars will take over any jobs that require drivers, such as taxi driving, public transportation, long-haul trucking. In the US, ‘Peloton Technology’ (automated vehicle linking for safety and fuel efficiency) is working with freight companies to ensure fleets of lorries can travel as if connected by a digital tow bar.

The UK has invested millions of pounds in its own research, including trials of driverless pods (small vehicles) to see how the technology interacts with the environment and other road users. In October 2016, Britain had its first driverless car tested at Milton Keynes. More trials will be conducted later at various locations in the UK. The tests will last for about 20 months while analysing the legal and insurance implications. Volvo of Sweden is planning to test a fleet of 100 semi-autonomous vehicles in Gothenburg in 14 months' time. In America, normal-sized cars have been tested by Google in California.

This new revolution follows on the heels of electric cars in the UK. By October 2016, more than 63,000 electric cars had been registered. These cars are small and their batteries have to be charged at charging stations along the route. It's suggested that the driverless car revolution will give the world economy a massive boost. As an indication, about 30% of congestion in cities is caused by people looking for parking spaces. With fully automated parking coordinated with infrared sensors in parking bays, the location of empty spaces will be determined instantly and cars can be parked undamaged in tight spots.

As the technology gets cheaper, driverless cars will increasingly become a reality. The challenging problem at the moment is not about technology but about liability: who will be responsible if the car crashes or kills a pedestrian? The manufacturer, the computer software maker or the owner? And what happens if the car or car system is hacked? There's also the problem of cyber warfare against autonomous vehicles that are linked to each other via the internet.
Insurance firms understand that 90% of car accidents are caused by human error rather than mechanical faults, and insurers in the UK pay out £27 million every day in motor claims. Data collected by the Motor Insurance Repair Research Centre has already shown that the adoption of autonomous emergency braking in conventional vehicles cuts collisions by 15% and injuries by 18%. The first fatal crash involving a driverless car (but with a driver in it, using the Autopilot system) occurred in May 2016, when a driver was killed at the wheel of his Tesla. All test drives have a driver in the car, just in case it misbehaves.

Technological advances even in conventional cars have also progressed unbelievably, with ingenuity that was unthinkable when I bought my Lexus 10 years ago. Two days ago I was travelling in my friend's BMW 6 Series (comparable to the Mercedes E 250). The car engine stops itself at traffic lights or in traffic jams as you brake and starts again when you release the brake (to save energy). It has energy-saving electromechanical power steering, i.e. the power steering only operates when the wheel is being turned. It parks itself in tight parallel parking bays after helping to find a parking space in the first place. It has night vision to help spot people and large animals on the road in low light. It's equipped with a new brake energy regeneration system, i.e. a mechanism which slows the vehicle by converting its kinetic energy into a form that can be either used immediately or stored until needed. The BMW 7 Series (2016) can park itself in a parking bay or into a garage and out, by pressing a button on the key fob, while the driver stands outside.
There are other novelties, such as Volkswagen's cheaper cars (the Golf or Passat), which have a "City Emergency Braking" system that can reduce accident severity and even avoid a crash. When the speed is under 18mph, it uses its laser sensor to detect the risk of an impending collision and automatically primes the brakes to make them more sensitive. If the driver does not brake and a collision is imminent, the system applies the brakes automatically.
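
The decision logic described for such a low-speed emergency braking system can be sketched roughly as follows; the time-to-collision thresholds and the sensor interface are invented for illustration and are not Volkswagen's actual calibration.

ACTIVE_BELOW_MPH = 18        # system only assists at low speed (per the article)
PRIME_TTC_S = 2.0            # invented: prime the brakes when a collision looks possible
BRAKE_TTC_S = 0.8            # invented: brake when a collision is imminent

def emergency_brake_decision(speed_mph, range_m, closing_speed_mps):
    """Return 'none', 'prime' or 'brake' for one sensor reading."""
    if speed_mph >= ACTIVE_BELOW_MPH or closing_speed_mps <= 0:
        return "none"
    time_to_collision = range_m / closing_speed_mps
    if time_to_collision < BRAKE_TTC_S:
        return "brake"        # driver has not reacted: apply the brakes
    if time_to_collision < PRIME_TTC_S:
        return "prime"        # make the brakes more sensitive
    return "none"

print(emergency_brake_decision(12, 6.0, 5.0))   # 'prime' (time to collision 1.2 s)
print(emergency_brake_decision(12, 3.0, 5.0))   # 'brake' (time to collision 0.6 s)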

Google began in 1996 as a research project at Stanford University to organise the world's information and make it universally available; it is now a multinational company headquartered in Mountain View, California. Google X is the part of the company developing its driverless car technology, mainly for electric cars. Its team developed a robotic vehicle in 2005.

In 2014, Google announced plans to create a driverless car that had neither a steering wheel nor pedals, and unveiled a fully functioning prototype in December of that year that it planned to test on San Francisco Bay Area roads beginning in 2015. Google plans to make these cars available to the public in 2020. Legislation allowing the testing of cars with Google's experimental driverless technology has been passed in four states and Washington DC.

Google had test driven a fleet of cars consisting solely of 23 Lexus SUVs by June 2015. The team had by then driven 1,600,000 km (1,000,000 mi). During that period there had been 14 collisions, in which the drivers of other (conventional) cars were at fault. However, in 2016 there was a crash caused by an error in the software.

The project team at Google has also fitted a few cars with the self-driving equipment, such as the Toyota Prius, Audi TT and Lexus RX450h. The equipment costs about $150,000 per car. The car drives at the speed limit it has stored on its maps (it can drive only on routes that are available on its map), and maintains its distance from other vehicles using its distance sensors. So far, as the law requires, the system provides an override that allows a human driver to take control of the car by stepping on the brake or turning the wheel, just as in normal cars.

The self-driving car has eight sensors. The most important technology is the rotating roof-top mounted Lidar – a camera that uses an array of 32 or 64 lasers to measure the distance to objects to build a 3D map at a range of 200m, letting the car see hazards, such as the edges of roads and identify lane markings by bouncing pulses of light off the car’s surroundings. Video cameras detect traffic lights, read road signs and keep track of other vehicles nearby. They also look out for pedestrians and other obstacles.
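
A lidar ranges objects by timing reflected laser pulses, so the underlying arithmetic is simply distance = c * t / 2, with c the speed of light and t the round-trip time. The pulse timings in the sketch below are made up for illustration.

C = 299_792_458.0            # speed of light in m/s

def range_from_pulse(round_trip_seconds):
    """Distance to the reflecting object: the pulse travels out and back."""
    return C * round_trip_seconds / 2.0

# Invented pulse timings: a return after ~1.33 microseconds is an object ~200 m away.
for t in (0.20e-6, 0.67e-6, 1.33e-6):
    print(f"{t * 1e6:5.2f} us -> {range_from_pulse(t):6.1f} m")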

The car can successfully identify a bike and understand that if the cyclist extends an arm they intend to make a manoeuvre. The car slows down and gives the bike enough space to operate safely. Bumper-mounted radar keeps track of vehicles in front of and behind the car. Radar sensors (not new) dotted around the car monitor the position of vehicles nearby. The car has a rear-mounted aerial that receives geolocation information from GPS satellites. An ultrasonic sensor on the outer rear wheel (depending on left- or right-hand drive) monitors the car's movements to detect its position relative to the 3D map.

Other technological companies are also competing with Google. Tesla (TSLA), an American motor company based in California that specialises in electric cars, plans to produce autonomous cars by 2017. Uber Technologies (San Francisco), an American online, on-demand transportation network company operating in more than 50 countries, beat Google by launching an autonomous (though each car has a driver behind the wheel to intervene in sticky spots) web-based ride service in Pittsburgh, Pennsylvania, in September 2016.

Uber itself was beaten to driverless car service by the Singapore startup nuTonomy, which launched the first driverless taxis in August 2016. It's easier in Singapore, as it is a small city. Baidu Inc (BIDU), the Beijing-based Chinese search engine company, has recently announced a partnership with BMW, with a huge potential market as China has nearly 20% of the world's population.

The development that gives the car the ability to change lanes is the most complex part. But already, Tesla's electric cars have Autopilot, with the capacity to change lanes at motorway speeds using autonomous cruise control. All you have to do is turn on the indicator. It will automatically steer itself to keep its position in the middle of the lane. It will also slow you down if the car in front changes speed.

Automated cars will give drivers more time for leisure or work. It's estimated that within 10 years there will be a lot of driverless cars on the road, and there will be less air pollution and a smaller carbon footprint.

Author:  Irengbam Mohendra Singh

Source:  http://www.thesangaiexpress.com/google-driverless-cars-will-be-on-the-market-by-2020
