Spotify’s Interface Insanity

How not to design multi-platform

I’m a huge fan of Spotify, and love their product — it’s profoundly changed the way that I consume music. However, I use their product across a wide range of platforms, and I’m constantly baffled by the lack of interface consistency.

Every platform (see the images below) uses a different menu layout, slightly different iconography and unique labels — all without making use of any platform-specific interface options (like gestures). Although I’d like to believe it has to do with the different use cases of the different platforms, I don’t think that has anything to do with it. Check it out and let me know what you think:

Spotify on the iPad — lovely start screen

Notice the layout of menu options. Playlists — which are the most important feature of the system — are buried near the bottom of the menu. And for good measure, when you first log in, you get that ugly, blank search screen (where the majority of the real estate isn’t devoted to search, but rather to telling us how much of the world’s music is available). Also unique to the iPad version is a menu called “What’s New” that doesn’t exist in any other interface. At least settings is somewhere logical.

Now, take a look at the iPhone:

Spotify for iPhone: Now with Discover!

So in the iPhone version, we have a new feature called “Discover”, a new concept called “Me” (how existential), Search and Browse are together, Radio comes before Inbox and settings is not separate from the other menu options. Maddeningly, playlists are once again below search, discover and radio…as though Spotify on the iPhone isn’t about playing existing music, but rather about deep discovery.

Now, let’s look at the desktop (Mac) version’s menu:

Spotify on the Mac Desktop

Here, search is in a completely separate area, there’s a concept called follow, I can play a queue — and because it’s probably really important — one of the highest-priority menu items is “(manage) devices”. For kicks, this interface adds a new concept of a “collection” that includes local files (even though local files are also available on every mobile device). Playlists here are conveniently shown at the bottom of the pane, “starred” is not a playlist (instead living in the collection area), and playlists are not sorted in any obvious or logical order.

Also unique to the desktop client interface, the mini player and play controls live in this left pane — not in the central area as they do in the apps.

Spotify Web Client

Answering all of my prayers, Spotify has also released a web client that runs in standard modern browsers — though it’s in beta. Although this app uses the same real estate as the downloadable client, they chose to follow the iPhone client interface…somewhat. Here, search and browse are separated again, playlists is where it “should” be (if you don’t actually care about usability) and all the social features appear to have been rolled into follow. Below this, settings is where I expect it, but the web client also appears to introduce a new set of options: my profile (the avatar), a music chat bubble and notification buttons. These options don’t exist in the top level nav of any of the other versions.

The web client is the cleanest and best-designed app, though I’m sure you’ll be as frustrated as I was when you discover that it’s not available directly when you log in — you have to click a music link to actually launch it. I’m sure this feature will come in the future, but it’s just another example of the company’s current UI bipolar disorder.

Which Spotify is which? I have a lot of respect for the company, and their product is transformative. But as my quick tour of their apps illustrates, they are not approaching UI with a common vision — and I’d posit that they don’t really even understand how their users use the service (evidenced by how poorly Playlist nav is handled). A consistent experience from one platform to another would go a long way to raising usability and eliminating confusion.

And just for shits and giggles: if you are a Sonos user, have a gander at the Spotify navigation there. I know it’s under Sonos’ control; interestingly, it is the cleanest nav in my estimation. What do you think?

Spotify for Sonos — really consistent, huh?


Apple’s Smartwatch Is Starting To Look A Lot More Like A Fitness Tracker

Or maybe even a generalized health companion. Why else is it holding discussions with the FDA and hiring physiologists?

February 06, 2014 Mobile

If and when Apple ever comes out with a wearable computer, the term “watch” may not necessarily apply.

According to reports, Apple is going whole hog into health and fitness for its long-rumored wearable device. This morning, a new job listing on Apple’s website called for a “User Studies Exercise Physiologist” who would design, test and run user studies of fitness tracking, including calories burned, metabolic rate, cardiovascular fitness and “measurement/tracking and other key physiological measurements.” That job listing subsequently vanished, although Mark Gurman from 9to5Mac posted a screenshot.

Such physiologists and related sensor and health experts are expected to contribute to an app in the forthcoming version of iOS 8 called “Healthbook,” Gurman reported last week. Apple already has fitness tracking capabilities built into the iPhone 5S via its M7 motion co-processor, and Healthbook will be the software manifestation of Apple’s fitness hardware.

Apple has met with the Food and Drug Administration to discuss mobile medical applications, according to a report in the New York Times. That makes sense if Apple is working on applications that may touch on healthcare issues, as federal laws concerning the gathering, storage and sharing of health information are quite strict. Moving into a field like healthcare is not something a company like Apple would do haphazardly.

It does look like Apple may be acknowledging that smartwatches are not going to recapitulate the capabilities of smartphones any time soon. Today’s smartwatches generally fall into one of two distinct categories: communication devices like the Pebble, Samsung’s Galaxy Gear and the Qualcomm Toq, and fitness devices like the Fitbit and the Nike FuelBand. From the reports so far, it appears that Apple is focusing on the latter category.

U.S. healthcare is a $2.8 trillion industry, counting everything from insurance to pharmaceuticals to doctor and hospital fees and so on. Technologists have long considered health and fitness ripe for “disruption” by lower-cost, information-centric approaches, but most attempts so far have run into the shoals of federal regulations and heavily entrenched, bureaucratic incumbents that have proven amazingly resistant to change.

Apple’s Wearable Challenges

The technology for the all-in-one smartwatch is not really mature enough for Apple to create a device that is both smartphone and wearable fitness tracker. So, Apple has to establish priorities.

First, it needs to uphold the software and hardware design principles that have made it the most profitable computer company in the world. That may mean a curved display that demonstrates Apple’s usual flair. It will certainly mean packing a device with enough power to collect relevant data and deliver simple but intuitive functions. At a minimum, that means Bluetooth, GPS, a variant of the M7 motion processor (likely an ARM-based Cortex processor of some type) and an accelerometer.

Battery life will be crucial; almost certainly, so will wireless charging.

Second will be a platform for developers to build on, the same way they do on iOS in the smartphone and tablet world. Apple will likely have several launch partner apps for its fitness band, with Nike a likely suspect (Apple CEO Tim Cook sits on Nike’s board) as well as apps like RunKeeper (one of the first fitness trackers in the App Store). If Apple makes its fitness tracker a platform to build on top of — like it did with the iPhone — it could go a long way toward creating interest in the product.

The third challenge may be the most difficult: getting the American health system to adopt an Apple-made fitness device as a go-to source for health monitoring. Imagine a doctor prescribing a health tracker to a patient to monitor cardiovascular health with all of that information directly shipped back to the doctor’s computer. This is the sort of idea that would lead Apple straight into discussions with the FDA.

Apple hasn’t rushed into the wearable market the way companies like Samsung and Sony have. Apple is taking a classic, pragmatic approach that could ultimately yield not just a consumer-grade fitness tracker, but a life companion device designed to keep you healthy.

Bitcoin is acting like a real currency now: it suffers shocks but doesn’t collapse

Feb. 7, 2014 – 12:54 PM PST


Summary: Bitcoin took a fall after some bad news this week — but nothing like its notorious crashes of the past. Meanwhile, other currencies, including the lira and the peso, fell just as much.

The latest news from the Bitcoin world involves more allegations of theft and skullduggery, but this time there’s a twist: investors aren’t sending the virtual currency into a Justin Bieber-style free fall. Instead, Bitcoin is behaving more like an emerging market currency in the face of bad news.

In case you missed it, the price of Bitcoin suffered a jolt on Thursday night after a popular exchange stopped processing transactions. This comes on the heels of a string of other negative news: Apple booted the last Bitcoin app from its iTunes store; another money laundering arrest; and Russia’s decision to ban the currency outright. Oh, and Coinbase — the most mainstream Bitcoin service — confirmed that someone is robbing its customers’ wallets.

So what’s the outcome of all this? After trading at around $800 for over a month, the currency finally took a small swoon last night and, as of Friday afternoon ET, one bitcoin is selling for about $740 on Coinbase:

Bitcoin price chart (Coinbase)

The overall drop is over 10 percent in 24 hours, a significant amount to be sure. But the drop is nothing compared to Bitcoin’s usual wild swings. As the Washington Post noted this week, the currency has gone through at least four spectacular bubbles and crashes, in which its value plummeted more than 80* percent — but this isn’t happening anymore. Indeed, this week’s drop is just a hiccup in comparison.

Meanwhile, Bitcoin wasn’t the only currency to stumble this month. Here’s what happened to the peso in Argentina, where President Cristina Kirchner has badly mismanaged the economy:

Argentine peso chart

And, if you want to discuss volatility, here’s what the Turkish lira has been doing in the last few weeks of a corruption scandal:


These charts show that while Bitcoin has troubles, so too do conventional fiat currencies that have been around for much longer. And while hackers and charlatans can damage Bitcoin’s reputation, they may be no worse for the money than corrupt and incompetent leaders who meddle with central banks for political purposes.

It’s too soon to say, of course, that Bitcoin has overcome its historical volatility — or even that it won’t blow away altogether. But for now, it’s significant that the currency’s value is holding relatively stable in the face of a wave of bad news.

*An earlier version of this story stated that previous Bitcoin crashes amounted to 100 to 500 percent. This has been corrected as per comments below.

Google posts large privacy violation notice on French homepage

Feb. 7, 2014 – 3:42 PM PST


Summary: Google lost an emergency appeal, meaning it is posting a notice in the middle of its French homepage for a period of 48 hours that tells users it violated their privacy.

A French court on Friday refused Google’s last-minute plea to suspend an order imposed by a privacy watchdog, meaning the search giant has to post a notice on its homepage for a period of 48 hours informing users that the company was fined €150,000 ($204,000) for violating data collection laws. Google complied with the order as of Saturday morning, as seen below:

Screen Shot Google privacy fine

The Friday court decision came in response to Google’s emergency appeal this week to the Conseil d’Etat, France’s highest appeals court for administrative law. The company argued that the penalty — which specified that the notice had to be printed in 13-point Arial font and appear in the center of the screen below the search box — was too severe and that Google’s reputation would be irreparably damaged.

In a decision and related press release issued on Friday, the French court explained that Google had failed to show the order would cause permanent damage to its financial interests or its reputation. It added that Google had failed to show that the privacy agency’s order was illegal, or that the public interest would be harmed by going forward with the order.

As a result, Google had to post the full paragraph set out in the agency’s original order, which informs consumers about the fine and requires a link to the decision on the privacy agency’s website.

Google can continue to appeal the underlying fine, which was imposed for the company’s failure to respect data protection and consent rules, but could not avoid the order to post the notice within seven days.

The €150,000 fine, which is the maximum the agency could impose, is meaningless to a company of Google’s size, but the search giant appears anxious to prevent governments from telling it what to post on its homepage. Google did not immediately return an after-hours request for comment, but the Wall Street Journal reported earlier this week that the company told the court that it always “maintained that [homepage] page in a virgin state.”

This is not the first time that a European court has required an American company to post a notice on its homepage; last year, a judge ordered Apple to post a notice on its website that Samsung did not violate a design patent for its iPad. And, as MarketingLand notes, the Belgians imposed an even more draconian order on Google in 2006.

Here are the relevant parts of today’s order. I’m posting the original French with an English translation below.

Here’s a portion of today’s ruling summarizing the contents of the original order:

[The agency] a décidé de prononcer à l’encontre de cette société une sanction pécuniaire de 150 000 euros, de rendre cette décision publique sur le site de la CNIL et d’ordonner à la société de publier à sa charge, sur son site internet, pendant une durée de 48 heures consécutives, le septième jour suivant la notification de sa décision, selon des modalités définies par celle-ci, le texte du communiqué suivant : « la formation restreinte de la Commission nationale de l’informatique et des libertés a condamné la société Google à 150 000 euros d’amende pour manquements aux règles de protection des données personnelles consacrées par la loi « informatique et libertés ».

[The agency] decided to impose on this company a fine of 150,000 euros, to make this decision public on the CNIL’s website, and to order the company to publish, at its own expense, on its website, for a period of 48 consecutive hours, on the seventh day following notification of the decision, according to procedures defined by the agency, the text of the following statement: “the restricted committee of the Commission nationale de l’informatique et des libertés has fined Google 150,000 euros for breaches of the personal data protection rules enshrined in the ‘Informatique et Libertés’ law.”

Here’s the part where the court says the notice won’t permanently harm Google’s reputation or the public interest:

par ailleurs, que la société ne saurait soutenir et n’allègue d’ailleurs pas qu’une atteinte grave et immédiate pourrait être portée, par la sanction dont la suspension est demandée, à la poursuite même de son activité ou à ses intérêts financiers et patrimoniaux ou encore à un intérêt public

Moreover, the company cannot maintain, and indeed does not allege, that the sanction whose suspension is requested would cause serious and immediate harm to the continued pursuit of its business, to its financial and property interests, or to any public interest

This story was updated at 10:45ET on Saturday to reflect that Google is complying with the order.

What’s the best way to fund the internet of things?

By Alicia Asin, Libelium
1 day ago



Summary: The IoT community is still debating which of three different funding models will best support development, with some favored by Europe and some by the US. Understanding these models is crucial to understanding where the technology is heading and what region will lead the way.

When it comes to smart cities and the internet of things, everyone asks, “Where is the money?” Throughout the year I have observed very dissimilar points of view on financing for the IoT in conference keynotes and discussions, in particular at the recent Internet of Things Forum in Cambridge in the U.K. and the M2M & Internet of Things Global Summit in Washington D.C. It struck me that the ideas were as far apart as the venues themselves. It’s important to understand these different funding models, because they are driving the development of the IoT.

There is no easy answer to the funding question because the IoT market is still very fragmented. From our perspective of sensors and hardware, we see small pieces of revenue coming from many different verticals. I think of these as trial balloons, just validating the huge potential of the IoT and its power to be the next technology revolution. Even so, we see smart agriculture and smart cities as the verticals with the most traction right now. Differences in these two sectors shed light on the key question of funding the IoT. Will it be public or private?

The three primary funding models

Smart agriculture is privately funded in many cases, and the return on investment has to be obvious from the start. Smart cities have many more stakeholders, and the approach is not so clear-cut. There are many different ways to support their development, coming from academia, communities, and industry.

1. Public money. In my view, had it not been for the economic crisis, public funding would have been the normal route. Right now European Union funds play an important role in allowing a number of connected smart cities pilots to really test the technology and accelerate the uptake of services. The existence of these European Commission funds makes the difference between what we see happening in Europe vs. the US market, allowing Europe to lead the way.

At Cambridge people thought of Europe as ahead of the U.S. in the IoT, whereas in Washington there were fears that a “go-with-grants” model is harmful because it is an unsustainable business model. The U.S. wants to see how Europe will maintain smart cities projects over time, and several critics point to the lack of a business model in flagship smart cities projects funded with EU funds. It’s true that these projects are usually led by academia, and business sustainability is not usually the focus. But don’t forget: the very first step is validating the technology.

2. Public/private partnership (PPP). In PPP, private companies invest and go on a cost-savings-share model with municipalities. It is a viable funding mechanism for smart cities, and in fact, the US has a history of finding capital for transportation and infrastructure projects this way. PPPs can create new forms of cooperation and resource sharing.

In this model, who would be the perfect private partner? Here, we stumble into a paradox in the nascent IoT market. Due to the similarity between IoT networks and telephony networks, operators should be the logical owners of IoT infrastructure as a new connectivity channel. However, the system integrators are the ones leading the way. This is because an operator needs to cover a whole country, or at least a circuit including major cities, and that requires a lot of investment. On the other hand, integrators can jump from project to project, testing the hottest verticals. But keep your eye on the bouncing ball, because this situation is evolving. Today operators are letting the integrators pay for their education.

3. Citizen participation. Community-led projects that apply the current trend of crowdfunding through platforms like Kickstarter are gaining momentum. I know of a number of civic projects that are spearheaded by citizen activists, such as AirQualityEgg, a device that measures air quality, or SafeCast’s network of individual airborne radiation sensors in Fukushima (Libelium was a partner in this project).


SafeCast’s crowdsourced map of airborne radiation in Japan.

This is such an interesting model, and I wonder if governments can incentivize citizens to acquire the sensors and build the systems themselves, perhaps by offering tax breaks or other benefits.

Investing the funds: hardware or services?

Once the money is raised, how will we spend it? In Cambridge, the prevailing view was that IoT money should be devoted to infrastructure. In Washington D.C. people were not so sure, because they believe in a model where services generate more money than hardware.

For the sake of argument, I like to compare the IoT to the railway age. There are many parallels, not only because both of these inventions are industrial revolutions with the ability to change everything. For a moment, try to imagine railway and train builders pitching to raise money. Would venture capitalists just tell them “Nah! We prefer to invest in companies that will be handling the ticketing system…?” Of course not! No services are possible, nor is any other type of future business, if we do not have the infrastructure in place.

Someday, it is true, hardware will be commoditized, and revenues will come from services associated with data, but while we are in the midst of building a new market, that day is still really far away.

Alicia Asin is the co-founder and CEO of Libelium, a provider of open hardware for wireless sensor networks used in Smart Cities and Internet of Things projects.

Apple buys back $14B worth of stock — CEO says ‘great stuff’ coming from new product categories

Apple’s stock repurchasing plan is moving ahead at full steam.

CEO Tim Cook said today that the company has bought back $14 billion worth of its own stock over the past two weeks, following an earnings report that left Wall Street cold, reports the Wall Street Journal.

Apple previously said that it plans to buy back $60 billion of its own shares over the next few years. Cook noted that, together with this latest deal, Apple has repurchased more than $40 billion worth of shares in the past year.

In its first quarter earnings report last week, Apple announced record revenues, but it sold fewer iPhones than analysts estimated. The company also projected that earnings for this quarter may decline from last year’s numbers. Apple’s stock fell around 8 percent following the report, which could explain why it was so quick to buy back more shares.

Cook reiterated that the company is exploring “new product categories,” which he describes as “really great stuff.” That’s not really a surprising response from Cook — critics are harping on the notion that Apple is no longer innovating with new devices. But on top of recent stories pointing to Apple’s new focus on health tech, his comments hint that the heavily rumored iWatch is in the works.


Snowden used cheap tools to outwit the NSA

February 9, 2014 8:00 AM

The classified document leaker Edward Snowden broke into the National Security Agency’s trove of spy data using cheap tools, according to a report in the New York Times.

Snowden used web crawler software to “scrape” data from the NSA’s computer networks, according to an unidentified senior intelligence official interviewed by the newspaper.

“We do not believe this was an individual sitting at a machine and downloading this much material in sequence,” the official said. The process, he added, was “quite automated.”

That’s surprising because the NSA is charged with protecting the country’s military intelligence from cyber attacks. Snowden’s attack wasn’t that sophisticated. And it was hardly the first, as it came three years after the WikiLeaks disclosures showed the vulnerability of government networks.
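The “quite automated” process the official describes is, at its core, just link-following. Here is a minimal sketch of how a crawler walks a link graph, using a hypothetical in-memory site (not a real network, and certainly not Snowden’s actual tooling):

```python
from collections import deque

# Toy link graph standing in for a document share: page -> linked pages.
# (Purely illustrative data.)
SITE = {
    "index": ["reports", "memos"],
    "reports": ["memos", "archive"],
    "memos": [],
    "archive": ["index"],  # cycle: the visited set prevents re-fetching
}

def crawl(start, graph):
    """Breadth-first crawl: follow every link once, collecting each page."""
    seen, queue, collected = {start}, deque([start]), []
    while queue:
        page = queue.popleft()
        collected.append(page)      # "download" the page
        for link in graph[page]:    # then enqueue its outgoing links
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return collected

print(crawl("index", SITE))  # ['index', 'reports', 'memos', 'archive']
```

The point is the asymmetry the NSA reportedly missed: a loop like this, left running with insider credentials, hoovers up an entire graph of documents with no human sitting at the keyboard.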

Snowden was working on the inside as a tech contractor. He’s now in exile in Russia, and his disclosures about how the agency spied on Americans have been highly damaging to the intelligence agency and the credibility of the federal government.

Snowden accessed 1.7 million files, and officials even questioned him about it while his work was in progress. Snowden reportedly responded that he was a systems administrator and was just conducting routine maintenance work. He found that while the agency had strong protection against outside attacks, it had rudimentary protections against inside attacks. The NSA declined to comment to the New York Times.


The real promise of big data: It’s changing the whole way humans will solve problems

Current “big data” and “API-ification” trends can trace their roots to a distinction Kant drew in the 18th century. In his Critique of Pure Reason, Kant distinguished between analytic and synthetic truths.

An analytic truth was one that could be derived from a logical argument, given an underlying model or axiomatization of the objects the statement referred to. Given the rules of arithmetic we can say “2+2=4” without putting two of something next to two of something else and counting a total of four.

A synthetic truth, on the other hand, was a statement whose correctness could not be determined without access to empirical evidence or external data. Without empirical data, I can’t reason that adding five inbound links to my webpage will increase the number of unique visitors by 32%.

In this vein, the rise of big data and the proliferation of programmatic interfaces to new fields and industries have shifted the manner in which we solve problems. Fundamentally, we’ve gone from creating novel analytic models and deducing new findings, to creating the infrastructure and capabilities to solve the same problems through synthetic means.

Until recently, we used analytical reasoning to drive scientific and technological advancements. Our emphasis was either 1) to create new axioms and models, or 2) to use pre-existing models to derive new statements and outcomes.

In mathematics, our greatest achievements were made when mathematicians had “aha!” moments that led to new axioms or new proofs derived from preexisting rules. In physics we focused on finding new laws, from which we derived new knowledge and knowhow. In computational sciences, we developed new models for computation from which we were able to derive new statements about the very nature of what was computable.

The relatively recent development of computer systems and networks has induced a shift from analytic to synthetic innovation.

For instance, how we seek to understand the “physics” of the web is very different from how we seek to understand the physics of quarks or strings. In web ranking, scientists don’t attempt to discover axioms on the connectivity of links and pages from which to then derive theorems for better search. Rather, they take a synthetic approach, collecting and synthesizing previous click streams and link data to predict what future users will want to see.
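The synthetic approach can be sketched in a few lines: instead of axioms about links and pages, just count what past users clicked. A toy example over a hypothetical click log (not any real search engine’s method):

```python
from collections import Counter

# Hypothetical click log: (query, clicked_url) pairs from past users.
clicks = [
    ("jaguar", "wikipedia.org/Jaguar"),
    ("jaguar", "jaguar.com"),
    ("jaguar", "wikipedia.org/Jaguar"),
    ("jaguar", "zoo.org/big-cats"),
]

def rank(query, log):
    """Rank results purely from observed click frequencies --
    no hand-written relevance axioms, just past behavior."""
    counts = Counter(url for q, url in log if q == query)
    return [url for url, _ in counts.most_common()]

print(rank("jaguar", clicks))  # Wikipedia first: it drew the most clicks
```

No theorem tells us Wikipedia is the best result for “jaguar”; the aggregated behavior of earlier users does.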

Likewise at Amazon, there are no “Laws of e-commerce” governing who buys what and how consumers act. Instead, we remove ourselves from the burden of fundamentally unearthing and understanding a structure (or even positing the existence of such a structure) and use data from previous events to optimize for future events.

Google and Amazon serve as early examples of the shift from analytic to synthetic problem solving because their products exist on top of data that exists in a digital medium. Everything from the creation of data, to the storage of data, and finally to the interfaces scientists use to interact with data are digitized and automated.

Early pioneers in data sciences and infrastructure developed high throughput and low latency architectures to distance themselves from hard-to-time “step function” driven analytic insights and instead produce gradual, but predictable synthetic innovation and insight.

Before we can apply synthetic methodologies to new fields, two infrastructural steps must occur:

1) the underlying data must exist in digital form and

2) the stack from the data to the scientist and back to the data must be automated.

That is, we must automate both the input and output processes.

Concerning the first, we’re currently seeing an aggressive pursuit of digitizing new datasets. Estimote, an Innovation Endeavors portfolio company, exemplifies this trend. Using Bluetooth 4.0, Estimote is now collecting user-specific physical data in well-defined microenvironments. Applying this to commerce, they’re building Amazon-esque data for brick-and-mortar retailers.

Tangibly, we’re not far from a day when our smartphones automatically direct us, in store, to items we previously viewed online.

Similarly, every team in the NBA has adopted SportsVU cameras to track the location of each player (and the ball) microsecond by microsecond. With this we’re already seeing the collapse of previous analytic models. A friend, Muthu Alapagan, recently received press coverage when he questioned and deconstructed our assumption in positing five different position-types. What data did we have to back up our assumption that basketball was inherently structured with five player types? Where did these assumptions come from? How correct were they? Similarly, the Houston Rockets have put traditional ball control ideology to rest in successfully launching record numbers of three-point attempts.
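The kind of analysis that deconstructs the five-position assumption can be sketched with a simple clustering pass: group players by their measured stats and let the data, not a label, decide the profiles. A toy 2-means sketch over hypothetical per-game numbers (not Alapagan’s actual method):

```python
# Hypothetical stats: (three-point attempts per game, rebounds per game).
players = {
    "A": (8.0, 2.0), "B": (7.5, 3.0),    # perimeter-heavy profiles
    "C": (0.5, 11.0), "D": (1.0, 10.0),  # interior-heavy profiles
}

def kmeans2(points, iters=10):
    """Minimal 2-means: alternate point assignment and centroid update."""
    cents = [points[0], points[-1]]  # crude init: first and last point
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            # assign each point to the nearest centroid (squared distance)
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in cents]
            groups[d.index(min(d))].append(p)
        # recompute each centroid as the mean of its group
        cents = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else c
                 for g, c in zip(groups, cents)]
    return groups

groups = kmeans2(list(players.values()))
print(groups)  # the perimeter pair and the interior pair separate cleanly
```

Run the same idea over real tracking data with an open number of clusters, and the question “are there really five positions?” becomes an empirical one rather than an inherited axiom.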

Finally, in economics, we’re no longer relying on flawed traditional microeconomic axioms to deduce macroeconomic theories and predictions. Instead we’re seeing econometrics play an ever-increasing role in the practice and study of economics.

Tangentially, the recent surge in digital currencies can be seen as a corollary to this trend. In effect, Bitcoin might represent the early innings of an entirely digitized financial system where the base financial nuggets that we interact with exist fundamentally in digital form.

We’re seeing great emphasis not only in collecting new data, but also in storing and automating the actionability of this data. In the Valley we joke about how the term “big data” is loosely thrown around. It may make more sense to view “big data” not in terms of data size or database type, but rather as a necessary infrastructural evolution as we shift from analytic to synthetic problem solving.

Big data isn’t meaningful alone; rather it’s a byproduct and a means to an end as we change how we solve problems.

The re-emergence of BioTech, or BioTech 2.0, is a great example of innovation in automating procedures on top of newly procured datasets. Companies like Transcriptic are making fully automated robotic wet labs, while TeselaGen and Genome Compiler are providing CAD and CAM tools for biologists. We aren’t far from a day when biologists are fully removed from pipettes and traditional lab work. The next generation of biologists may well use programmatic interfaces and abstracted models as computational biology envelops the entirety of biology — driving what has traditionally been an analytic truth-seeking expedition toward a high-throughput, low-latency synthetic data science.

Fundamentally, we’re seeing a shift in how we approach problems. By freeing ourselves from the intellectual, and perhaps philosophical, burden of positing structures and axioms, we no longer rely on step-function-driven analytic insights. Rather, we’re seeing widespread infrastructural investment that accelerates synthetic problem solving.

Traditionally these techniques were constrained to sub-domains of computer science – artificial intelligence and information retrieval come to mind as tangible examples – but as we digitize new data sets and build necessary automation on top of them, we can employ synthetic applications in entirely new fields.

Marc Andreessen famously argued that “software is eating the world” in his 2011 essay. However, as we dig deeper and better understand the nature of software, APIs, and big data, it’s not software alone, but software combined with digital data sets and automated input and output mechanisms that will eat the world, as data science, automation, and software join forces in transforming our problem-solving capabilities – from analytic to synthetic.

Zavain Dar is a venture capitalist at Eric Schmidt’s Innovation Endeavors. Prior to joining Innovation Endeavors he was an early employee at Discovery Engine, a next-generation keyword search engine acquired by Twitter, where he engineered machine learning and data science algorithms on a proprietary distributed-systems framework to build web-scale ranking algorithms. He was also a founder of Fountainhop, one of the first hyper-local social networks, where he worked on developing the product and overseeing its launch at colleges nationwide in 2010. Send him your thoughts @zavaindar.

What is left when you log out of Facebook?

How Facebook transitioned to being more of a platform than a product

Facebook has become a great platform, and may be losing some of its mojo as a product. I was spending so much time on the website and iPhone app that I decided to log out, as a short experiment.

To be clear, I am still “on” Facebook; I am just not logged in anymore on the website or the mobile application. But iOS 7 and OS X still have my OAuth credentials, so I am still plugged in to Facebook’s platform, just not the product anymore.

The two things I was most afraid of losing were events and Messenger. Facebook really succeeded in making my personal day-to-day social life revolve around its Events feature, so much so that my friends who stayed off Facebook stopped being invited to parties (crazy, I know! but that’s the subject for another post). As for Messenger, some people seem to think Facebook is the best way to reach me, and I can’t set up a vacation auto-responder, an email forward, or anything similar to really stay away.

But in both cases, and many more, there’s an app for it. I kept the Facebook Messenger iPhone app, and thanks to the Sunrise calendar app, all of my events are still on my phone. I can also still use many apps with single sign-on, so I am not sure what I am missing.

I have started using Twitter a little more, reading more blogs and news than I used to, and I even found time to think and write a few blog posts (this one being the first I’ll publish). It’s important to note that I have tried many applications lately, and often connected them to Facebook too, so I am not just leaving Facebook, or simply switching to Twitter.

Really, I feel Facebook has built an awesome platform, and I am not sure I’d really need to log back in to the feed or profile (though I certainly will). This has been many years in the making, and with their new approach of building an ecosystem of applications, they might just be, once again, one step ahead of the curve!

Further Reading

Social products win with utility, not invites (Guest Post)



bRight Switch Wants To Upgrade The Light Switches In Your Home To Android Touchscreens


Google’s Android OS is the dominant mobile platform by market share, but it’s also increasingly pushing beyond portables and onto a range of other device types — including, if this crowdfunding campaign delivers on its promises, the boring old wall switches in your home.

bRight Switch is a prototype project that’s within touching distance of its $115,000 Indiegogo crowdfunding goal (with less than a day of its campaign left). Its aim is to replace plain old light switch hardware with what’s basically a small tablet fixed to the wall, expanding the functionality of the switch interface beyond simply switching your lights on and off.

The bRight Switch actually plugs into a base unit to convert a wall switch from dumb switch to smart screen, but its makers claim the installation process is an easy job for an electrician.

The bRight Switch tablet design is customised for a wall-mounted context, offering features that make sense in such a setting, such as people detection to automatically turn the lights on when someone walks into a room.

Other features the smart switch is set to support include the ability to remotely switch your lights on and off via the Internet and a learning mode that gets to know your routines over time and automatically switches lights on and off based on prior usage.

Also on board is a security feature whereby you can play back footage recorded by the camera on one of the switches in another room, plus video calling (via Skype or similar) and music streaming via Internet radio services such as Pandora.

Other features include a built-in alarm; temperature display; dimmer ability for certain types of bulbs; an intercom feature allowing for chatting between bRight Switches located in different rooms; plus other security features such as setting an alarm to be triggered by motion in a particular room.

The units will also run standard Android apps, so you could presumably fire up Angry Birds on your wall if you’re really bored. bRight Switch’s makers are also planning to supply an open API to encourage developers to create new apps for the wall beyond what they’ve envisaged.

Of course, all these features are aspirations at this point, with only a prototype of the bRight Switch in existence. If the device hits its funding target, which at the time of writing is looking pretty likely, its U.S.-based makers reckon they can deliver to backers by July.

The switches use Wi-Fi to plug into your home router to support functions such as Skype calling and streaming Internet radio, while the Z-wave wireless protocol is used for talking to lights around your home that are not wired directly to the switch.

How much will this smart light switch set you back? They’re charging $75 per switch for non-Bluetooth switches, and $90 for the Bluetooth version. Or $325/$435 for a five-pack of the two respective options.
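As a quick sanity check on those bundle prices, here is a throwaway Python sketch (using only the campaign figures quoted above) that works out what the five-packs actually save you versus buying five singles:

```python
# Prices quoted by the bRight Switch Indiegogo campaign.
single_prices = {"standard": 75, "bluetooth": 90}    # per-switch price, USD
pack_prices = {"standard": 325, "bluetooth": 435}    # five-pack price, USD

def five_pack_savings(variant):
    """Return how much cheaper the five-pack is than five single switches."""
    return 5 * single_prices[variant] - pack_prices[variant]

print(five_pack_savings("standard"))   # 5 * 75 - 325 = 50
print(five_pack_savings("bluetooth"))  # 5 * 90 - 435 = 15
```

So the five-pack discount is $50 (or $10 per switch) on the standard version, but only $15 on the Bluetooth one.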

What’s the point of the Bluetooth addition? Added functionality such as the ability to link up to external Bluetooth speakers for “full spectrum sound” — or, for a more customised take on home automation, the ability to track your phone (and therefore you) around the house, providing a “custom personalized experience as you move from room to room.”