by Paul Arnote (parnote)
Google Advances G+ Public Shutdown Four Months
After first announcing in October 2018 that it was shuttering the public version of Google+ due to a bug that allowed third parties unintended access to otherwise private information on Google+ users, Google has decided to move up the planned shutdown of Google+ from August 2019 to April 2019.
At the root of the decision to accelerate the Google+ shutdown was the discovery of another bug in the Google+ API that could potentially compromise user information. The bug was discovered during routine in-house testing of the API, and Google claims that no user information was compromised. They also claim that the bug was repaired within a week of being discovered, and that they can find no instances of anyone exploiting it.
From the Google blog on December 10, 2018:
We've recently determined that some users were impacted by a software update introduced in November that contained a bug affecting a Google+ API. We discovered this bug as part of our standard and ongoing testing procedures and fixed it within a week of it being introduced. No third party compromised our systems, and we have no evidence that the app developers that inadvertently had this access for six days were aware of it or misused it in any way.
With the discovery of this new bug, we have decided to expedite the shut-down of all Google+ APIs; this will occur within the next 90 days. In addition, we have also decided to accelerate the sunsetting of consumer Google+ from August 2019 to April 2019. While we recognize there are implications for developers, we want to ensure the protection of our users.
Google estimates that approximately 52.5 million users were potentially impacted by the bug. The bug could potentially expose a user's email address, age, birthday, occupation, real name, and other profile information, even if the user had set that information to be kept private.
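To see why a bug like this is so easy to introduce, here is a toy sketch of the general bug class: a profile endpoint that forgets to filter out fields the user has marked private. Everything here is hypothetical illustration; Google's actual code is not public, and these field names and functions are invented for the example.

```python
# Toy illustration of the privacy-leak bug class (NOT Google's code).
# Each profile field carries the user's privacy setting.
PROFILE = {
    "name": {"value": "Jane Doe", "private": False},
    "email": {"value": "jane@example.com", "private": True},
    "birthday": {"value": "1990-05-01", "private": True},
}

def get_profile_buggy(profile):
    """Buggy endpoint: returns every field, ignoring the privacy flags."""
    return {field: data["value"] for field, data in profile.items()}

def get_profile_fixed(profile):
    """Fixed endpoint: returns only the fields the user left public."""
    return {field: data["value"]
            for field, data in profile.items()
            if not data["private"]}

print(sorted(get_profile_buggy(PROFILE)))  # private fields leak to callers
print(sorted(get_profile_fixed(PROFILE)))  # only public fields returned
```

A single missing filter like this, exposed through an API that third-party apps can call, is all it takes to leak profile data at scale.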
Another Month, Another Facebook Scandal
In a December 19, 2018 article on Yahoo! Entertainment (excerpted from an article on The Wrap), Facebook admitted to granting access to users' private messages to major tech partners, such as Netflix, Spotify and Amazon. The New York Times originally broke the story on December 18, 2018. Facebook responded to the original NY Times article with a blog entry on December 18, 2018.
Facebook claims that access was granted only after Facebook users "consented" to log into partners' apps with a Facebook user ID.
Yeah! Right!
Who takes the time to read the fine print when logging into a service using another service's login information? Yes, we all should take the time, but in reality, WHO actually reads all that legal mumbo-jumbo? Especially when offered the convenience of logging in with the account of another service?! Reading that information, if you can even understand its twists and turns, would negate the convenience and time savings of logging in with an account you might already have elsewhere.
Fortunately, Spotify claims to have NEVER accessed Facebook users' private messages. Netflix ended their use of the program over three years ago, and like Spotify, claims to have never accessed Facebook users' private messages. Amazon, on the other hand, has admitted to using Facebook's program to its benefit.
From the Yahoo! Entertainment article:
Amazon, in a statement to TheWrap, said it used its access to better "sync" Facebook users with its products.
"Amazon uses APIs [application program interface] provided by Facebook in order to enable Facebook experiences for our products," an Amazon spokesperson told TheWrap in a statement. "For example, giving customers the option to sync Facebook contacts on an Amazon Tablet. We use information only in accordance with our privacy policy."
These regular reports of privacy violations make me glad I followed my gut instinct and never got a Facebook account. I've never had one. This continual parade of privacy violations only strengthens my resolve to never, EVER have a Facebook account.
As a result, Facebook's stock prices on the NASDAQ Exchange fell from a high of $217.50 per share on July 25, 2018, to trading around $133 per share on December 28, 2018. That's almost a 40% drop in less than six months. It doesn't seem that Facebook is well positioned to survive too many more of these privacy scandals. But then again, a large percentage of Facebook users don't seem to be all that concerned about the privacy breaches, and continue on undeterred, as if nothing has ever happened.
New Website Removes Photo Backgrounds In Seconds, For FREE!
Anyone who has spent any time working on a computer has had the need to work with images. And, with cameras on virtually every cell phone made in the past 10 years, the ability to take photographs has blossomed to unforeseen heights.
Many, many of those photos have distracting backgrounds. Once a user digs deep enough into graphics editors, whether it's GIMP, Photoshop, or some other program, they will want to attempt to "clean up" those cluttered and distracting backgrounds. Anyone who has tried to remove a photo background will attest to what a tedious process it is. In most cases, it takes minutes (if not hours) of painstaking work to satisfactorily remove a distracting background.
Well, thanks to an enterprising team of programmers out of Germany, you can remove the background of images in a matter of seconds, using an AI (artificial intelligence) algorithm. The new website is called remove.bg. The best way to illustrate its abilities is to provide an example. The image I chose to use as an example is a cell phone camera image of my two-year-old daughter, Lexi.
The original image is on the left, complete with a cluttered and distracting background. Within a matter of seconds of uploading the image to the website, the image in the center is displayed, with a download button beneath it. The image on the right is the downloaded image.
The resulting image is currently limited to 500 x 500 pixels, and the algorithm only works when there is a human face in the image. The programming team hopes to enhance the algorithm to include objects as well, such as product images. They also hope to produce images at resolutions higher than 500 x 500 pixels.
Does it work? Mostly. In some instances, it's a bit too aggressive in removing the background. Notice how a good portion of my daughter's fine hair is missing on the left side of her face, compared to the original image. In some instances, it's not aggressive enough. Notice that the spout of the sippy cup on the table behind her is left intact. Some of the oak coffee table is left visible, too, between her arm and torso in the downloaded image.
Still, it does an admirable job. While I probably won't be able to replace the hair on the left side of her face that was inadvertently removed, cleaning up the areas where background remains will take only minor work.
If you're worried about privacy and the status of your uploaded original images, don't. Here's an excerpt from their about/FAQ page:
Your images are uploaded through a secure SSL/TLS-encrypted connection. We process them, and temporarily store the results so you can download them. After that (about an hour later) we delete your files. We do not share your images or use them for any other purpose than removing the background and letting you download the result.
Now, with the new image downloaded with a transparent background, I can place my daughter's image in other images. It will make it seem that she is somewhere that she never really was. Without a doubt, remove.bg takes the frustration and immense work out of removing backgrounds, speeding up your post processing work.
You can have all kinds of fun with this. Just be honest and tell your viewers that the image has been manipulated ... because, otherwise, they may not even realize it.
Debian's Anti-Harassment Team Is Removing A Package Over Its Name
On December 20, 2018, Michael Larabel, the founder of Phoronix, published an article that reported on a Debian package that was being removed from the Debian repository for violation of Debian's anti-harassment policies.
The package, Weboob (Web Outside of Browsers), is a Qt Python package that allows users to interact with websites without having to launch them in a browser. The original "complaint" was filed on August 14, 2018, in the Debian Bug Report logs.
As a result, the Debian Anti-Harassment Team recommended removing the "offensive" package from the Debian repository for violating the Debian Code of Conduct's requirement to be respectful. The Debian project leader, Chris Lamb, has requested that the package be removed from the repository.
What's most interesting is reading the comments to the Phoronix article. As the posters there aptly and rightly point out, this is the start of treading a slippery slope. When will "manpages" be removed because it might offend someone? When will a program be removed because someone is offended by the programmer's politics or views?
This is what ALWAYS happens when social justice warriors target anything, for whatever reason. Sure, the name and anatomical references are childish. But is that reason for removal? Maybe Debian needs to remove the Linux kernel, too. This Reddit thread from 2017 illustrates the number of "offensive words" found in the Linux kernel.
What The EFF Learned About The Hemisphere Program After Suing The DEA
As the year draws to a close, so has EFF's long-running Freedom of Information Act lawsuit against the Drug Enforcement Administration about the mass phone surveillance program infamously known as "Hemisphere."
We won our case and freed up tons of records. (So did the Electronic Privacy Information Center.) The government, on the other hand, only succeeded in dragging out the fake secrecy.
In late 2013, right as the world was already reeling from the Snowden revelations, the New York Times revealed that AT&T gives federal and local drug enforcement investigators access to a phone records surveillance system that dwarfs the NSA's. Through this program, code-named Hemisphere, police tap into trillions of phone records going back decades.
It's been five long years of privacy scandals, and Hemisphere has faded somewhat from the headlines since it was first revealed. That was long enough for officials to rebrand the program "Data Analytical Services," making it even less likely to draw scrutiny or stick in the memory. Nevertheless, Hemisphere remains a prime example of how private corporations and the government team up to help themselves to our digital lives, and the lengths they will go to to cover their tracks.
Here's a refresher on how it works:
Through the Hemisphere program, AT&T assists federal and local law enforcement in accessing and analyzing its massive database of call detail records (CDRs)--information on phone numbers dialed and received, as well as the time, date, and length of call and in some instances location information. More specifically, Hemisphere has access to telecommunication "switches" operated by AT&T that guide telephone calls. Because other providers use AT&T "switches" for their calls, the database contains call detail records regardless of carrier. The database has records concerning local, long distance, cellular, and international calls.
AT&T embeds employees with police agencies in at least three hubs: Los Angeles, Houston, and Atlanta. These employees run the software that searches and analyzes AT&T's massive phone database. Cops (usually working drug cases) from around the country contact their regional hub to get the records, and federal officials can query Hemisphere without first getting permission from a judge. The system was reportedly especially useful for tracking people when they switched phones.
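The refresher above can be made concrete with a toy sketch of what querying a call detail record (CDR) database looks like. This is purely illustrative; Hemisphere's actual software and schema are not public, and every record and field name here is invented. The key point the excerpt makes is that, because calls route through AT&T switches, records land in the database regardless of which carrier the parties use:

```python
from datetime import datetime

# Toy call detail records (CDRs): numbers dialed/received, time, call
# length, and coarse location -- the fields the EFF excerpt describes.
# Hypothetical data; Hemisphere's real schema is not public.
CDRS = [
    {"from": "555-0100", "to": "555-0199", "time": datetime(2013, 9, 1, 14, 5),
     "seconds": 120, "cell_site": "LA-17", "carrier": "AT&T"},
    {"from": "555-0142", "to": "555-0100", "time": datetime(2013, 9, 2, 9, 30),
     "seconds": 45, "cell_site": "LA-03", "carrier": "OtherTelco"},
    {"from": "555-0123", "to": "555-0456", "time": datetime(2013, 9, 3, 11, 0),
     "seconds": 300, "cell_site": "HOU-09", "carrier": "AT&T"},
]

def records_for_number(cdrs, number):
    """Return every record touching a number, regardless of carrier --
    mirroring how calls routed through AT&T switches end up in the
    database even when neither party is an AT&T customer."""
    return [r for r in cdrs if number in (r["from"], r["to"])]

hits = records_for_number(CDRS, "555-0100")
print(len(hits))  # both records, including the non-AT&T carrier's call
```

Even this trivial lookup returns the non-AT&T record, which is what makes a switch-level database so much broader than any one carrier's customer records.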
As shocking as this program was when it was first revealed, the secrecy surrounding it was just as outrageous. The media reported that law enforcement agencies are instructed to bend over backwards to keep Hemisphere from the light of day. That includes leaving the details of its use out of arrest reports and court proceedings. Through a technique known as "parallel construction," after using Hemisphere, cops devise a second, more conventional way to obtain the information, and put that reverse-engineered method down on paper. The result is that law enforcement lies about the origins of its investigative methods, which also prevents individuals they target from knowing about, much less challenging, the surveillance.
With our lawsuit, EFF set out to shine light on this program, to free up documents, and to pry off redactions. Now we are releasing the final batch of more than 300 pages of records we freed up through our FOIA work. We have settled the case, with DEA reimbursing EFF $160,000 in legal fees. A judge has now dismissed the case, pursuant to the terms of a settlement. This was a hard-won victory by our legal team, Aaron Mackey, Adam Schwartz, and Jennifer Lynch, Senior Legal Secretary Madeleine Mulkern, and former EFF Staff Attorney Hanni Fakhoury.
You can read the "rest of the story" here.