Consumers are seeing more and more ads popping up on their computers and mobile devices. Sometimes they’ve requested this kind of information, but other times they don’t know how or why they’re getting these unsolicited ads.

One way these ads happen is through unique identifiers in the web traffic sent by phones and other mobile devices. This information can be misused by ad networks to track consumers’ online activities. The really bad news is that consumers can’t turn off these types of unique identifiers. What does this mean? It means ad networks can track consumers regardless of whether consumers have tried to protect their privacy via their browsers’ privacy settings or by being on “Do Not Track” lists.
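These identifiers are injected as extra HTTP headers by the carrier, so they travel with every unencrypted web request the device makes. The sketch below shows, in Python, how one could check a request’s headers for such identifiers; “X-UIDH” is the header Verizon was reported to use, and the header list and helper function here are illustrative assumptions, not an exhaustive audit tool.

```python
# Headers reported in press coverage as carrier-injected tracking IDs.
# This set is illustrative only; real audits would need a maintained list.
KNOWN_TRACKING_HEADERS = {"x-uidh", "x-acr"}

def find_tracking_headers(headers):
    """Return the names of known tracking identifiers present in a dict of HTTP headers."""
    return sorted(h for h in headers if h.lower() in KNOWN_TRACKING_HEADERS)

# Example: headers as a header-echoing test site might show them.
# The X-UIDH value below is a made-up placeholder.
observed = {
    "User-Agent": "Mozilla/5.0",
    "Accept": "text/html",
    "X-UIDH": "OTgxNTk2NDk0ADJVquRu5NS5",
}
print(find_tracking_headers(observed))  # -> ['X-UIDH']
```

Because the header is added by the network rather than the browser, clearing cookies or enabling private browsing does nothing to remove it, which is exactly why these identifiers defeat the usual privacy settings.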

The good news is that AT&T had been experimenting with these types of unique identifiers but has discontinued doing so. Robert McMillan recently reported that news (“AT&T Stops Using Invasive ‘Perma-Cookies,’ But It May Turn Them Back On”; November 14).

The bad news is that Mr. McMillan reports that Verizon is still using these unique identifiers. It would be welcome news if Verizon decided to stop, but consumers shouldn’t be optimistic about that happening.

Consumers have become accustomed to seeing different icons and seals on company websites. For years, one of the most reassuring has been the TRUSTe seal. Why? Because companies displaying that seal did so after having their privacy practices verified against TRUSTe’s requirements, including transparency and the company’s assertions about the options consumers will have over how their personal information is collected and used.

Now consumers are learning that TRUSTe’s assertions about its own practices have been lacking for years.  TRUSTe has just entered into a settlement with the Federal Trade Commission (FTC).  The FTC had filed a complaint against TRUSTe because of two of its practices that were alleged to be false, misleading and, therefore, deceptive to consumers.

What were these practices? As Lesley Fair wrote in an FTC blog, TRUSTe claimed that companies wanting to display its “Certified Privacy Seal” underwent recertification reviews to reconfirm their privacy practices. Plus, TRUSTe claimed that it was an independent non-profit, thus making its certifications even more objective (“The FTC’s TRUSTe case: when seals help seal the deal”; November 17th).

Neither was true. As Ms. Fair writes, the FTC found that TRUSTe hadn’t done recertifications in over 1,000 instances between 2006 and 2013. Moreover, TRUSTe became a for-profit company in 2008 yet continued carrying the misrepresentation that it was a non-profit entity on recertified websites.

This is sobering news for consumers who often don’t have the time and/or means to undertake their own verifications of a website’s privacy practices.  So can consumers continue trusting the TRUSTe seal and/or other similar seals?  Maybe, but with much more caution and with less absolute trust.


It was bad enough learning in September that 56 million credit cards were impacted by hackers who got into Home Depot’s payment systems. Now we’re learning that 53 million email addresses were also stolen during that hack. Lee Munson reports that troubling news in a recent article (“53 million email addresses stolen in Home Depot breach”; November 7th).

That same article reports the only good news about this breach: Home Depot’s investigation found that no passwords, personal information or additional payment card information had been stolen or compromised by the hackers. That’s small comfort given this latest discovery.

Home Depot is also warning customers to be alert for any phishing scams that could happen using the stolen email addresses.

I previously wrote that Home Depot, among other retailers, will now be installing chip-and-PIN technology in all of its stores.

These privacy and security improvements cannot happen fast enough for Home Depot customers.


The Federal Government is making an important enhancement to protect the credit cards used by thousands of Federal employees. On October 18th, President Obama signed an Executive Order requiring that microchips and PINs be added to the Federal Government’s credit and debit cards. These security measures will be added beginning in January 2015.

In his Executive Order, the President also announced that several major companies are taking steps to enhance the security of their systems and provide more customer protections. The companies doing so include Home Depot, Walgreens, Wal-Mart and Target. These companies will begin using chip-and-PIN-compatible card terminals in their stores, many of them by January 2015.

The “chip and PIN” protections are already used widely for credit cards in Europe. While nothing is 100% “hacker proof,” these types of protections add important levels of security to credit and debit cards. Let’s hope that more companies start installing chip-and-PIN-compatible card terminals and that these security protections come to be used more widely in the United States.


Just last week I wrote about medical information being sought by hackers even more than credit card information. Now there’s more evidence supporting the need for consumers to be especially vigilant in using wearable fitness applications.

Why? Because it turns out that some wearable fitness apps do not have the kind of privacy and security features needed to protect the wearer’s personal information. Mathew J. Schwartz reported on a recent study by Candid Wueest, a Symantec security researcher (“Do Wearable Devices Spill Secrets? Sizing up the Privacy Risks of Fitness-Tracking Apps”; October 17, 2014).

Mr. Wueest studied the top 100 most popular fitness-tracking applications on both the Apple Store and Google Play.  He found that the information transmitted by the applications often included the wearer’s name, email address, password, date of birth and target weight as well as their Facebook and Google access tokens.

What else did Mr. Wueest unearth? As Mr. Schwartz wrote, Mr. Wueest’s research and analysis found that:

  • 52% of the applications offered no privacy policy;
  • Each application shares personal data with, on average, 5 sites, including application-related analytics sites, advertising networks, social media sites and marketing networks;
  • 20% of the applications Mr. Wueest studied were transmitting login credentials in clear text; this means the information could be intercepted by anyone connected to the same public WiFi hotspot as one of the devices, for example, or by someone who planted a Bluetooth “sniffer” within range of one of the devices;
  • Some of the applications encrypted the credentials but failed to encrypt the personal data being transmitted; and
  • Many of the application makers and device manufacturers failed to secure the personal information being stored on their sites.
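The clear-text finding in that list comes down to the URL scheme an app uses to talk to its servers: anything sent over plain “http” travels unencrypted. As a rough illustration of the kind of check a researcher might automate, the sketch below flags endpoints that would transmit data in the clear; the URLs and the helper function are made up for this example, not taken from Mr. Wueest’s study.

```python
from urllib.parse import urlparse

def insecure_endpoints(urls):
    """Return the endpoints that would send data (including logins) unencrypted."""
    return [u for u in urls if urlparse(u).scheme != "https"]

# Hypothetical endpoints a fitness app might be configured with.
app_endpoints = [
    "https://api.example-fitness.com/login",   # TLS: credentials encrypted in transit
    "http://sync.example-tracker.com/login",   # clear text: readable on shared WiFi
]
print(insecure_endpoints(app_endpoints))  # -> ['http://sync.example-tracker.com/login']
```

Anyone sitting on the same network path, from a coffee-shop hotspot to a planted sniffer, can read what goes over the unencrypted connection, which is why that 20% figure matters.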

Again, I’m not suggesting that consumers abandon fitness regimes and fitness-tracking devices. What I am emphasizing is the critical need to learn as much as possible about the data security and privacy practices of the manufacturers of wearable devices.


More and more medical and health information is being collected electronically.  People are using fitness apps, for example, to watch their calorie intake and gauge their level of physical activity.  Those are certainly good and admirable goals.

But what many people don’t realize is that there are serious security issues with using these kinds of medical and health apps. Health data is increasingly held by technology companies, not by health and medical professionals and entities such as doctors and hospitals, as Brian Fung noted in a recent Washington Post article. He gave the example of Apple’s HealthKit, which collects and centralizes health information across apps (“Facebook may be eyeing your health data. Should you trust it?”; October 3rd). Moreover, the privacy and security requirements that the Health Insurance Portability and Accountability Act imposes on medical and health professionals, hospitals and associated records do not cover these types of fitness and health apps.

What is particularly powerful are two statistics in Mr. Fung’s article. First, he writes that “[t]here’s also a lot of money floating around the healthcare industry — an estimated $3 trillion worth ….” Second, he noted that “[h]ealth records are so valuable, security experts say, that hackers will pay up to 20 times more for a person’s medical record on the black market than for a stolen credit card number.”

Those are staggering numbers, and they only underscore that individuals need to protect their health and medical information. Moreover, individuals need to think carefully before using a fitness, medical or health app. I’m not suggesting that individuals shouldn’t use them. But individuals need to dig in and learn how the information that’s being collected will be stored; how it will be protected; and whether the company whose product they’re using is going to sell or share any of the collected data with a third party.

Individuals need to be pro-active so they don’t unwittingly help hackers make money off of some of their most sensitive personal information.

National Cybersecurity Awareness Month kicks off today.  Consumers will be able to read numerous articles and announcements providing helpful guidance and information.

I want to start with “tip #1,” which is practical and useful throughout the year. Protecting one’s privacy starts with the basics, and that means using a strong password. It often seems more complicated than it needs to be. I’ve found an article today by Paul Ducklin, with an accompanying video, that helps explain what a proper and strong password is; the steps for creating one; and why doing so is very important.
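One common piece of advice on this topic is to generate passwords randomly rather than inventing them, since random passwords resist guessing and dictionary attacks. The short Python sketch below illustrates that idea using the standard library’s cryptographically secure `secrets` module; it is just one illustration of the principle, not Mr. Ducklin’s method.

```python
import secrets
import string

def make_password(length=16):
    """Generate a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # a different random password every run
```

Of course, nobody memorizes strings like these, which is why advice along these lines usually goes hand in hand with using a password manager.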

You can find Mr. Ducklin’s article and the video online; the article is titled “How to pick a proper password [VIDEO]” and is well worth the time.

October is the month during which cybersecurity is highlighted but these practices are ones consumers need to follow year round.



Facebook users are about to see even more ads targeted at them. Sometime during the week of September 29th, Facebook will launch its new Atlas advertising platform, a re-engineered version of the current Atlas Advertiser Suite.

What will the new Atlas do, and what does this mean for Facebook users? Marketers will be able to target ads to users better and more easily, and will be able to measure the ads across the web. As described by Jack Marshall in the Wall Street Journal (“Facebook Extends Reach With New Advertising Platform”; September 22nd) and Charlie Osborne for Between the Lines (“Facebook to unveil new ad platform to track users across multiple devices”; September 23rd), marketers will be able to see which ads Facebook users look at, which ads they interact with, and which ads influence the purchases Facebook users make.

This will mean more than just more precisely targeted ads. As the above authors report, Facebook hopes that Atlas will allow linking ad interactions with Facebook accounts so that users can be tracked “anonymously” across and between multiple devices. The example used by Ms. Osborne is a marketer using Atlas to track a Facebook user and see if she makes a purchase using her PC after seeing an ad on her smartphone.

People have different attitudes towards being tracked and receiving targeted ads. It will be important for Facebook users to stay posted on how Facebook informs users about Atlas. Some Facebook users might want to recheck their current Facebook privacy settings if they want to limit the types and volume of ads that are, or will be, popping up.

On September 9th, Timothy D. Cook, Apple’s chief executive, unveiled two new Apple products that mean Apple could become a collector of consumers’ personal health and financial information. And that means Apple will need to protect the personal information it holds; that’s where the privacy concerns arise.

As Brian X. Chen and Steve Lohr reported in the New York Times, these products include a health-monitoring smartwatch and Apple Pay; the latter is a new payment service consumers will be able to use to buy items wirelessly on some Apple devices (“With Apple Pay and Smartwatch, a Privacy Challenge”; online September 11th).

The new Apple smartwatch will be available sometime in 2015, and it will give consumers health-monitoring capabilities such as tracking heart rate and other information. Apple Pay will be available sometime in October.

What are the privacy concerns? First, consumers using Apple Pay will be providing valuable financial information such as credit and debit card numbers. As Mr. Chen and Mr. Lohr note, that’s the kind of information hackers want to get. With the Apple smartwatch, the health information consumers enter is not covered by regulations, so there are currently no controls over the ways in which that information is secured and/or used. As Mr. Chen and Mr. Lohr wrote, regulators are becoming more interested in the range of health-monitoring devices on the market and are starting to raise issues about the need to protect this type of health information and make sure it stays private.

In announcing these new products, Apple said Apple Pay will not store any payment information on Apple devices or servers; it will serve only as a conduit between merchants and banks. As for the new smartwatch, Apple has updated its guidance to app developers. The guidance now states that developers of health apps working with HealthKit, the new set of tools for tracking fitness and health statistics, can’t use the personal health data gathered for advertising or data-mining purposes, with the exceptions of using it to help manage an individual’s health and fitness or for medical research.

Those exceptions are ones that could, however, be read very broadly.  It remains to be seen if Apple really enforces these privacy protections and whether consumers can really count on Apple to provide the kind of protection needed for these types of personal and sensitive information.


Facebook users will want to know about, and use, the latest privacy control that’s just been announced and will start rolling out soon. The new tool, represented by a blue dinosaur that pops up on users’ computer screens, will help users review their current Facebook privacy settings and, if desired, make any updates or changes to those settings. It is a helpful way for Facebook users to be even more privacy pro-active.

Graham Cluley has written an article about this latest feature that includes screen shots of the blue dinosaur (“Facebook’s privacy dinosaur will check your settings for you”; September 6th). Users are given the option of using the feature or not; if selected, the review should take only about one to two minutes.

Mr. Cluley urges Facebook users to select the “Let’s Do It!” option, since the relatively easy process allows Facebook users to be doubly sure that they are sharing their information with the people with whom they want to share it. As he notes, it will be easy for Facebook users to review both the people with whom they’re sharing information and any third-party Facebook apps to which the user is connected. These reviews will allow Facebook users to confirm and/or refine these settings.

Using the new blue dinosaur is a way for Facebook users to augment their privacy.  But, as Mr. Cluley notes, Facebook users shouldn’t use the “Privacy Checkup” instead of, or as a substitute for, routinely doing their own checks and reviews of their Facebook privacy settings.

