A selection of 8 links for you. Use them wisely. I’m charlesarthur on Twitter. Observations and links welcome.
The startup team, which is based in Shanghai, sees it being used for things like telling you that you should take an umbrella, reminding you that you’re running late to an appointment, or for turning off all your smart lights at once. With a single press, it could alert your significant other that you’re leaving the house.
All that will depend on it working nicely with the brand of smart lights that you have, or syncing with the online calendar service that you use. The fact that the ELLA Assistant is subservient to your phone and other smart gadgets means it has to work seamlessly with them all, or it won’t gain favor with consumers. War tells Tech in Asia that the team will add support for various things as demand arises, but there are no specific supported devices or services listed yet – which is because the little gizmo hasn’t even launched. Once it’s out, it’ll have its own app store.
The ELLA Assistant will hit Kickstarter some time in August.
Hmm. Don’t think so, somehow.
Project Lightning will bring event-based curated content to the Twitter platform, complete with immersive and instant-load photos and videos and the ability to embed those experiences across the Web — and even in other apps.
“It’s a brand-new way to look at tweets,” says Kevin Weil, who runs product for the company. “This is a bold change, not evolutionary.”
It is also still a few months out, and things could change. But here’s how it will work.
On Twitter’s mobile app, there will be a new button in the center of the home row. Press it and you’ll be taken to a screen that will show various events taking place that people are tweeting about. These could be based on prescheduled events like Coachella, the Grammys, or the NBA Finals. But they might also focus on breaking news and ongoing events, like the Nepalese earthquake or Ferguson, Missouri. Essentially, if it’s an event that a lot of people are tweeting about, Twitter could create an experience around it.
This likely comes out of the machine-intelligence-curated tweet streams from a company that Twitter just bought – under Costolo’s leadership, don’t forget. He just took too long to do it. (By the way, in future could “top secret” – used in the headline – please be reserved for things that actually are top secret, such as the content of the Snowden documents, and not PR-led product demos by the CEO?)
Researchers rely on journals to keep up with the developments in their field. Most of the time, they access the journals online through subscriptions purchased by university libraries. But universities are having a hard time affording the soaring subscriptions, which are bundled so that universities effectively must pay for hundreds of journals they don’t want in order to get the ones they do.
Larivière says the cost of the University of Montreal’s journal subscriptions is now more than $7m a year – ultimately paid for by the taxpayers and students who fund most of the university’s budget. Unable to afford the annual increases, the university has started cutting subscriptions, angering researchers.
“The big problem is that libraries or institutions that produce knowledge don’t have the budget anymore to pay for [access to] what they produce,” Larivière said.
“They could have closed one library a year to continue to pay for the journals, but then in twenty-something years, we would have had no libraries anymore, and we would still be stuck with having to pay the annual increase in subscriptions.”
The kicker: the five largest academic publishers produce 53% of scientific papers in natural and medical sciences – up from 20% in 1973. Consolidation and monopoly.
EFF and eight other privacy organizations back out of NTIA face recognition multi-stakeholder process » Electronic Frontier Foundation
Despite the sensitivity of face recognition data, however, the federal government and state and local law enforcement agencies continue to build ever-larger face recognition databases. Last year the FBI rolled out its NGI biometric database with 14 million face images, and we learned through a Freedom of Information Act (FOIA) request that it plans to increase that number to 52 million images by this year. Communities such as San Diego, California are using mobile biometric readers to take pictures of people on the street or in their homes and immediately identify them and enroll them in face recognition databases. These databases are shared widely, and there are few, if any, meaningful limits on access.
EFF has been especially concerned about commercial use of face recognition because of the possibility that the data collected will be shared with law enforcement and the federal government. Several years ago, in response to a FOIA request, we learned the FBI’s standard warrant to social media companies like Facebook seeks copies of all images you upload, along with all images you’re tagged in. In the future, we may see the FBI seeking access to the underlying face recognition data instead.
Huh. The FBI does that, does it?
According to Graham Hann, the head of technology, media and communications at the law firm Taylor Wessing, the terms of the deal are broadly in line with industry standards – except the requirement to opt out.
“The content of the notice is not unusual, although it has deliberately been dumbed down, possibly for clarity,” he told the BBC.
“However, the opt-out approach is very unusual and I don’t see how the notice could form a binding contract without a positive reply.
“Apple clearly wants to launch with as much content as possible and has taken this risk-based approach. Some publishers may object and even threaten to sue.
“However, I think it would be hard to claim damage beyond a reasonable royalty fee.”
Soooo… it’s not actually a big deal?
while [Android TV] mostly got the dictation right, it often failed to produce the results I was looking for. Asking for Breaking Bad brought up detailed information about the show and its actors, but no way to watch it. This query also produced a link to Pomodoro Wear, a countdown timer app shaped like a tomato and designed for Google’s Android Wear smartwatch platform.
Even Google itself does not seem to know quite how to use Android TV. Its marketing materials suggest asking for “romantic comedies set in New York”. But when I tried that on the Android TV itself, it produced only a list of YouTube videos, the first of which was about Lego sets from a New York toy fair. With no When Harry Met Sally or Manhattan to be found, I can only wonder whether anyone else — including Google’s own staff — has ever searched for something to watch this way.
Bear in mind that Apple experimented with the same voice dictation system for TV and, by the account in the WSJ, abandoned it.
Rene Ritchie with a series of Q+As on the vulnerability disclosed yesterday:
Q: So were the App Stores or app review tricked into letting these malicious apps in?
A: The iOS App Store was not. Any app can register a URL scheme. There’s nothing unusual about that, and hence nothing to be “caught” by the App Store review.
For the App Stores in general, much of the review process relies on identifying known bad behavior. If any part of, or all of, the XARA exploits can be reliably detected through static analysis or manual inspection, it’s likely those checks will be added to the review processes to prevent the same exploits from getting through in the future.
Apparently apps now have to declare the URL schemes they will use in plaintext in a .plist file; that’s easy to review, and Apple can easily spot duplicates through static analysis. Security researchers suggest Apple probably began adding such tests when it was told about the weakness – so this is perhaps already “fixed” in the simplest way it can be. (Checking plist files can be done retrospectively too.)
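The duplicate-scheme check described above is straightforward to sketch. Here is a minimal Python illustration of the idea – not Apple’s actual review tooling, and the function names are my own – assuming each app’s Info.plist declares its schemes under the standard CFBundleURLTypes / CFBundleURLSchemes keys:

```python
import plistlib
from collections import defaultdict

def registered_schemes(plist_bytes):
    """Extract the URL schemes an app declares under CFBundleURLTypes."""
    info = plistlib.loads(plist_bytes)
    schemes = []
    for entry in info.get("CFBundleURLTypes", []):
        schemes.extend(entry.get("CFBundleURLSchemes", []))
    return schemes

def find_duplicate_schemes(apps):
    """Given {app_name: plist_bytes}, map each URL scheme to the apps
    claiming it and return only the schemes claimed more than once."""
    claims = defaultdict(list)
    for app_name, plist_bytes in apps.items():
        for scheme in registered_schemes(plist_bytes):
            claims[scheme].append(app_name)
    return {s: names for s, names in claims.items() if len(names) > 1}
```

Because the declarations are static plaintext, a check like this can run over every submitted binary – and, as the piece notes, retrospectively over the existing catalogue.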
With Google announcing Google Now on Tap at Google I/O 2015 and Apple announcing Proactive at WWDC 2015, there is now a lot of discussion on how useful these predictive personal assistants will be. In particular, there is a lot of discussion on how much data these personal assistants will need to collect about you, and whether these assistants need to send this data to be analysed in the cloud.
The problem I have with these arguments is that they do not go into specifics. Instead of saying “everything is going to be cool”, we should be having a detailed discussion of how each predictive recommendation is actually made, and whether each recommendation could be performed easily on your local device, or whether it needs to be done in the cloud.
I think Kagami’s question is really “What things need to be in the cloud for predictive analysis to work?” You could argue that traffic or transit news needs to be analysed in the cloud (a la Google) so it can warn you about delays; but on the other hand, an Apple device could pull that data from the cloud, combine it with what’s already on the device, and warn you too.
So the quest goes on.