Device, application, and ecosystem security concerns

As @dstrauss pointed out this morning, this has been derailing several threads lately, so perhaps by actually having a specific place to discuss it we can avoid, or at least reduce, some of the walkabouts.

So with that preface: this really is a complex and multifaceted topic. However, I will start by stating some general opinions on it.

  1. Breaches and exploits are becoming more numerous, more potentially dangerous, and harder to detect; hence the urgency that some action(s) be taken.

  2. Individual users should bear some responsibility: for one thing, they shouldn’t knowingly pirate apps or content, and they should heed the warnings that all of the major browsers now display when a site/link may be unsafe.

  3. OTOH, expecting them to be primarily responsible isn’t the answer either. For one thing, exploits and hacks have become far more sophisticated than most users would know how to prevent or deal with, and IMHO they shouldn’t have to, given the role tech now plays in the vast majority of their lives.

  4. Government could and should play a role, but it has been so ham-handed and/or political over the last decade or so that I don’t have a lot of faith at this point; witness the Declaration post from a few days ago, or the push in the EU to force all the phone makers to USB-C.

  5. So that leaves the industry, and possibly some more robust, coordinated industry efforts that members pledge to adhere to; I’m thinking of certain ISO standards, for instance. And I think that requiring the App Store as the only method to install an app is problematic at best, though it does provide at least a small amount of protection, for instance via Apple’s review process or Google Play Protect scanning.

I’ll readily admit they are far from perfect, though, and to date achieve only “better than nothing at all” status.

Ok, done for the moment, but I do look forward to others’ views and I intend to participate as well. :slight_smile:


I have also been pondering this for a while, and what I can’t understand is this: when a cybersecurity agency is established with tax revenue, shouldn’t it also be responsible for aiding the taxpayers? Sending out warnings about present threats and how to prevent them, things like that.


Great discussion.

One of the challenges in this space is state actors both attacking and funding attacks on technology infrastructure. A few years ago, hackers allegedly working for foreign actors exfiltrated millions of records from the US government database of personnel files and background checks ([Office of Personnel Management data breach - Wikipedia] - yeah, I know about citing Wikipedia, but it’s easier than citing all the news stories).

When foreign governments fund attacks on civilian technology infrastructure, there has to be national government involvement in the response. Similarly, criminal enterprises now attack technology as well as people and physical property without regard to national boundaries. Law enforcement isn’t capable of responding.

To @Kumabjorn’s point, we pay taxes, and then we pay $$$ again in lost productivity, in increased software/hardware upgrade costs, and in the time we constantly take to patch.

There has to be a better way.


These are all great points, and there is NO clear solution in my mind. As long as banks and stagecoaches were easy targets, robbers knew no bounds. Theft and mayhem on the internet appears to be far easier, with far fewer consequences. And even when we catch bad actors, the justice system is just not ready to handle the prosecutions as a deterrent. Throw in the fact that Putin and Xi ACTIVELY sponsor theft and sabotage, and you are in the crosshairs of indiscriminate cartel-level bad guys at all turns (and the FBI and CIA want to know why they can’t play in that sandbox as well)…


Ok, I’ll have to issue my mea culpa here for knocking Dale’s wagon off course. :sweat_smile: (But he said the experiment was finished!)

But I’ll have to throw in my vote again for broader education in the principles of cybersecurity, cryptography, and data management. It doesn’t help that these are all big words, but in practice simple principles can be taught, and they go a long way in reducing the average user’s exposure to threats.

I am not saying that big tech doesn’t have to shoulder some of the burden, but the reality is every tech company has had security breaches, and they will always downplay and minimize the damage to the consumer, while continuing to use security as the justification for more control of your digital life.

At a broader level, the public is the greatest driver of the government policy that may eventually establish those industry standards you talked about. But this can only happen if the public understands the basic concepts that underpin digital security. We can see the beginnings of government action in the EU, but it is primarily a means to limit monopolistic practices, rather than establishing a more equal partnership of consumer consent and corporate responsibility into the future.

So with all the rambling out of the way, what I’m saying in practice is: YOU are responsible for your own digital wellbeing, because ultimately it’s your own data/privacy/finances at risk.


@marty And once again I’ll have to point out that you are refusing to acknowledge reality.
There is simply NOTHING that 99.99% of us could do about a software/hardware vulnerability, as we lack the skills and tools to do so. Plus, there is no easily accessible compendium/knowledge base, even if we could find the skills and tools.

Not to mention that more often than not, companies tend to hide vulnerabilities and flaws unless they think they are likely to get caught.

Short analogy here. There is an intersection about midway through my commute that for all practical purposes I can’t avoid. And for a variety of reasons, it frequently is non-functional. Additionally, because it’s part of a major thoroughfare, it becomes a free-for-all when it does fail.

So to follow your idea to its logical conclusion, I’d either have to become a civil engineer and join the traffic department or “educate myself” on public transportation options. :frowning:

And I am using the traffic light analogy deliberately, because it can literally be a life-altering outcome given our dependence on the technology roadways that, for all intents and purposes, we are all forced to use.

You must be a hard-core Libertarian. And you have yet, at least to my perception, to offer any real-world, actionable ideas other than “everyone needs to educate themselves”.

BTW: to reference an earlier thread where traffic circles came up, this is the one I was thinking of; it has changed between a traffic-light-controlled intersection and a traffic circle at least three times since I’ve been a resident.


@Marty - I have to side with @Desertlap on this one for this very reason. I remember all too vividly when even Norton and McAfee encountered vicious malware they couldn’t remove, so they took you through complicated registry edits and file-directory pruning to eradicate it (and I feel I’m fairly competent), but more often than not it came back.

I am a hard-core Libertarian, but while personal responsibility is the preferred solution, what do you do when people open email attachments from real-looking addresses, or popups saying you’re infected and to click this to clean your computer? Much like Texas, where we adhered to “personal responsibility” in the face of mounting Covid-19 deaths instead of demanding that people HAVE to exercise that personal responsibility. It’s all good in the abstract but fails miserably in the field.


And this is precisely why I think that cybersecurity agencies have a special responsibility. If you don’t want the government’s fingers in the pie, then you need to provide it through some other means. I’m guessing here, but we probably have more than 3 billion internet users these days; you can’t expect each and every one of them to have the same nerdy interest in tech as this bunch of castaways.


Question: are “walled gardens” part of the solution or part of the problem?

Both IMHO.

On the one hand, you have the Google Play store, where it’s pretty trivial to get an app on the store, and unscrupulous actors have used that to create cheap “knockoff” clones of mainline apps, in many cases infected with adware/malware.

And complicating this further, as a trickle-down effect of Apple’s App Store, users over-trust apps they download from it.

OTOH, ask pretty much any developer and they can give you a litany of frustrations with the Apple App Store submission/approval process. Even in our tiny little niche, we have had our own custom apps rejected multiple times with little to no guidance on why.

And while it’s not really an issue for us, since we don’t sell our apps per se (they are just tools to work with our devices), the 30% Apple takes on every sale is potentially crushing to new and small developers whose margins are already thin at best to begin with.

Not to mention Apple’s arbitrary rules also keep potentially useful apps off the store, and users have “jailbroken” devices to make use of them, thus opening the device to malware coming from multiple vectors.

I think I’ve said in multiple threads that the stores are the best option we have, but that’s only because the alternatives are far worse.

Not to mention the, to my mind, potentially anticompetitive practices that all three of the major stores engage in to various degrees.


Gentlemen, you’ve both hit the problem-and-solution bullseye. Mix in the human nature to be controlling, greedy, and nefarious, and we have the perfect storm. Instead of the internet being a wide-open utopian paradise of free information, it is just another repeat of human history: chaos, theft, greedy ambition, and government control…

Think this

Not this


I am surprised how many of you have consigned humanity to perpetual ignorance and feebleness. :stuck_out_tongue:

I think one thing you misunderstand about my point is that it is not a mere platitude that people ought to learn; it is that they must learn, because the corporations are not the ones liable at the end of the day.

Read any terms and conditions from Apple or Google, and you will find that ultimately the user is responsible for any damages that may come from installing any app, even if it’s signed off and downloaded from the official store. As it stands today, the corporations hold all of the control, but none of the responsibility to the user.

And how do you expect the laws and policies to change if we the public cannot even begin forming a sentence regarding digital security? Surely if people cannot do it, their representatives won’t be doing it, and all of digital policy will be written by corporations.

Now take a look at something like the recent wave of Right to Repair, which Samsung, Microsoft, and even Apple have started to come on board with. Does the average user need to be an engineer to understand the benefits of user serviceability? No, they only need to grasp the basics and the wider impact on environmental sustainability and consumer choice to begin the process of responsive legislation.

All I am saying is that the solution needs to begin with removing the one-sided decision making in digital governance. If users can’t understand, then help them understand. The other road is just more ignorance, more vulnerability, and more one-sided control (without liability for corporations) in a neverending cycle.


To further your point, I firmly believe no small part of the decision to embrace “right to repair” is to put responsibility (and cost) on end users/customers. I’m sure there will be lots of ALL CAPS WARNINGS that self-repair voids all warranties (especially the ones we pay for - that means you, AppleCare).


Public Enemy - Fight The Power (Official Music Video) - YouTube


@Marty and you continue to not answer my main point. Name one actionable thing that any of the 99.99% of the users can do to fix or even avoid a software and/or hardware vulnerability…

Kumbaya or “sticking it to the man” isn’t going to help either. About the only practical thing I can see that the aforementioned 99.99% can do is vote with their wallet, but that’s both going to be a long time coming and will take something truly egregious and likely life threatening.

I know the libertarians believe that government never has a role, but sometimes it is the only one that can act. One of the best examples of that is the interstate system. But of course now that’s going to H### too, due to neglect by those same folks, thus, in their minds, proving their point, to the detriment of all of us.

And I’m actually the opposite here, as I think government, laws, and the enforcement of them are among man’s greatest creations.

And I’m not surprised by your seeming contempt for even reasonable regulation…


One more thing, @Marty: I’m trying, in what I hope is a light-hearted attempt to pin you down, to make a point, which is how truly daunting (and out of most users’ control) this is.

I have a peer who manages network security for a fairly large media conglomerate. Last year she was hacked, and a bunch of her personal data, including financial, was accessed. She discovered it when her checking account was emptied out and two of her credit cards were maxed out.

To this day, no one has been able to definitively determine the method and means of the attack.

And to your point, she is most certainly at least as “educated” as anyone I know. But she and her husband are still dealing with the havoc to their credit…

Sorry, I wanted to discuss the motivation before getting into the weeds, because I want to make clear, I’m not trying to turn the average user into an expert. The goal is getting the ball rolling on the long process of: basic security knowledge → public awareness of the freedoms/responsibility equation → effective legislation to regulate corporate control.

So to use a concrete example, let’s go back to your point about poisoned apps being the entry point for most malware:

Apple’s solution is to bar sideloading in any form, while Google’s solution is to have an “allow unknown apps” toggle without any guidance thereafter. They both foster only the vague notion that any outside app is ‘scary and dangerous’, without providing knowledge or context to the consumer.

What I propose, at the very least, is that the OS present to the user a basic file-signature check of any .apk before sideloading. It would check against the database of hashes on Google Play and tell the user, in simple terms, whether the app has been verified by Google and what version it is, with a link to the app URL. This will clarify the concept that not all .apks are dangerous; some are merely copies from the Store.
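As a rough illustration of the hash-check idea above, here is a minimal Python sketch. The `KNOWN_PLAY_HASHES` table is entirely hypothetical, a stand-in for whatever verification database Google would have to expose; the digest shown is simply the SHA-256 of an empty file, used purely for demonstration.

```python
import hashlib

# Hypothetical database of known Play Store hashes. In reality this
# would be a service run by Google; the entry below maps the SHA-256
# of an empty file to a made-up app, for demonstration only.
KNOWN_PLAY_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855":
        ("example.app", "1.0"),
}

def check_apk(path: str) -> str:
    """Hash the .apk in chunks and look it up in the known-hash table."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    match = KNOWN_PLAY_HASHES.get(sha256.hexdigest())
    if match:
        return f"Verified by Google Play: {match[0]} v{match[1]}"
    return "Not found on Google Play: verify the source before installing"
```

The point of the sketch is that the check is cheap: one pass over the file, one dictionary (database) lookup, and a plain-language verdict for the user.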

To go a step further, if the .apk is not from the Store, then Google should allow other organizations to register as official signers (e.g. Epic) for the .apks they distribute. Similarly to the above, the OS would show the user that “this app has been verified by [organization]”, the version, and a link to the app URL. This will clarify the concept that other, reputable organizations can also distribute apps outside the Store, and that habitually checking for a reputable source is important.
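The signer-registry idea could look something like this sketch. Real app signing uses asymmetric (public-key) signatures; HMAC is used here only because it is in the Python standard library and illustrates the same look-up-and-verify flow. The registry entries and key values are made up.

```python
import hashlib
import hmac

# Hypothetical registry of organizations allowed to sign sideloaded
# .apks. Real keys would be public keys distributed with the OS; the
# byte strings here are demo stand-ins.
SIGNER_REGISTRY = {
    "epic": b"epic-demo-key",
    "fdroid": b"fdroid-demo-key",
}

def verify(apk_bytes: bytes, signer: str, signature: bytes) -> str:
    """Look up the claimed signer and check the signature over the .apk."""
    key = SIGNER_REGISTRY.get(signer)
    if key is None:
        return "Unknown signer: treat as unsigned"
    expected = hmac.new(key, apk_bytes, hashlib.sha256).digest()
    if hmac.compare_digest(expected, signature):
        return f"This app has been verified by {signer}"
    return "Signature check FAILED: the file may have been tampered with"
```

The flow mirrors the proposal: unknown signers fall through to the unsigned path, and a tampered file fails verification even when the signer is registered.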

Now the last case: modded or unsigned apps, the most ‘dangerous breed’. Here the OS should issue a warning that the app has no known signer and may be malware. But instead of merely scaremongering, the OS should present the VirusTotal scan hit ratio. This will give a general indication of the danger level of the app, along with the scan URL and a call to action for the user to launch the app in “safe mode”. This introduces the concept that different malware carries different threat levels, and that every app install is a judgement call on the part of the user.
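For the unsigned case, here is a hedged sketch of how the OS might pull and present the detection ratio. The endpoint and response shape follow VirusTotal's public v3 "files" API, but `VT_API_KEY` is a placeholder and the one-line summary format is my own invention.

```python
import json
import urllib.request

VT_API_KEY = "YOUR_API_KEY"  # placeholder; requires a VirusTotal account

def fetch_analysis_stats(sha256: str) -> dict:
    """Fetch per-engine detection counts for a file hash from VirusTotal v3."""
    req = urllib.request.Request(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": VT_API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["data"]["attributes"]["last_analysis_stats"]

def summarize(stats: dict) -> str:
    """Turn the raw engine counts into the plain-language hit ratio."""
    flagged = stats.get("malicious", 0) + stats.get("suspicious", 0)
    total = flagged + stats.get("undetected", 0) + stats.get("harmless", 0)
    return f"{flagged}/{total} engines flagged this file"
```

For example, `summarize({"malicious": 3, "suspicious": 1, "undetected": 60, "harmless": 6})` yields `"4/70 engines flagged this file"`, which is roughly the one-glance number the warning dialog would show.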

The “safe mode” should be an on-demand, memory-isolated environment where the user can preview the functionality of the app and quickly uninstall it if it proves different from expectations. When the user wishes to trust the app (and this is crucial), the user is prompted to select “permission levels” instead of the blanket ‘allow all communications/storage’ access we have now.

For example, the ‘Low’ risk level could allow only temporary directories and no internet access; ‘Medium’ could allow user-profile directories and ask for internet connections each time; and ‘Full Access’ would be the same as for signed apps. Here again, the user is introduced to the notion that they can exert selective control over an app.
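The tiered permissions above could be modeled as simply as this; the level names and the exact permissions attached to each are my own assumptions for illustration, not any shipping Android feature.

```python
from dataclasses import dataclass

# Illustrative model of the proposed tiered permission levels. The
# storage scopes and network policies are assumptions matching the
# low/medium/full example in the text.
@dataclass(frozen=True)
class PermissionLevel:
    name: str
    storage: str   # which directories the app may touch
    network: str   # "none", "prompt", or "full"

LEVELS = {
    "low": PermissionLevel("low", storage="temp-only", network="none"),
    "medium": PermissionLevel("medium", storage="user-profile", network="prompt"),
    "full": PermissionLevel("full", storage="all", network="full"),
}

def describe(level_name: str) -> str:
    """Render a level as the plain-language summary a settings UI might show."""
    lvl = LEVELS[level_name]
    return f"{lvl.name}: storage={lvl.storage}, network={lvl.network}"
```

The design point is that the user picks one of three named bundles rather than auditing individual permissions, which keeps the judgement call simple.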

These are not complicated concepts, they are very feasible to implement, and they would by themselves already decrease exposure to poisoned apps. But most importantly, they are a step toward introducing the public to concepts in digital security.


@Bishop, oh no doubt. But in the case of walled gardens, the (legal) responsibility already rests on the end-user.

In short, we hand over all the control in the walled garden, and assume all the liability if the walled garden is breached. Great deal, huh? :stuck_out_tongue:

This is what I’m getting at for “more equitable policy” in legislation as the end-goal.


I appreciate your candid and bold responses, Marty. I think you and I most closely see eye to eye on this matter. I would say neither of us is opposed to regulation as such (roads, taxes, and so on), but rather to unnecessary regulation that takes away from the general population personal responsibility it can perfectly well shoulder, and hands it to the powers that be. My previous director at the healthcare company I work for specialized in cybersecurity in the U.S. Air Force, and your suggestions align nearly verbatim with many of the common-sense tactics he has brought up as a leader in the field: coaching users to be vigilant and think for themselves rather than rely solely on systems that go in and out of style in the tech sphere to provide them a security blanket.

Apple, through the endless drum beat of their marketing, has gotten a good chunk of IT and system engineers to sing to their merry melody of a walled-garden Shangri-La. But what do you do when that walled-garden paradise one day finally gets raided, and the inhabitants-turned-sheep do not know how to think for themselves when the wolf among them devours the whole lot? Therein lies the danger of casting up the false perception that a walled-garden OS environment is fully safe and secure, when jailbreaking (a “white hat”, or good-guy, form of hacking) exists precisely because nothing, even from Apple, is 100% bulletproof or foolproof. I believe the techniques found in walled gardens, such as sandboxing, are good to implement, but they are not exclusive to walled gardens. As I see it, the Valve Steam Deck’s SteamOS is a powerful real-world example of how we can have all the guard rails while also giving users the option to venture beyond the safety of the reef. SteamOS provides all sorts of abstracted levels of security that one would normally see in Android or iOS, and the security-measure overrides are hidden behind layers of roadblocks that require effort to jump through.

For example, the user has to intentionally boot into the desktop environment, go into the command line, and then know and correctly enter the commands (and only after creating a sudo password) to modify the OS image, which is configured to be immutable to the user; and even then, the changes applied will be rolled back and overwritten by a subsequent system update. Outside of games that run under Steam’s own proprietary, confined, sandboxed environment, all non-Steam games and programs installed from the desktop run as sandboxed applications known as Flatpaks, and unless the user downloads a program (Flatseal) that grants them permission to access other system resources, they cannot see data or resources outside their own local region. And to go a step further, each Windows program and game installed there is given its own separate sandboxed virtual folder structure mimicking Windows, where none has access to anything of the others outside its own per-program Windows folder tree.