Device, application, and ecosystem security concerns

This has already been in place in iOS since 2014 and Android since 2016, and both stores use additional methods such as checksums and file payload lists to verify integrity. Even the Windows Store, as of Windows 11, does the same thing.
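
For context on what “checksums” means in practice here, this is a minimal sketch of comparing a package against a published SHA-256 digest, which is roughly the kind of integrity check the stores automate behind the scenes. The file name and expected hash in the usage comment are made up.

```python
import hashlib

def verify_package(path: str, expected_sha256: str) -> bool:
    """Compare a package's SHA-256 digest against the value published by the store."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large .apk/.msix files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()

# Hypothetical usage: the expected hash would come from the store's signed metadata.
# verify_package("example-app.apk", "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b")
```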

So let’s assume for the moment that we are talking about “legitimate” apps that a user might want to install but whose developers don’t want to pay Apple’s and Google’s significant fees. In your model, who maintains, updates, and audits what would essentially be a good-app, bad-app, unknown-app registry? And who staffs and, more importantly, funds the resources to manage and maintain it on an ongoing basis?

Answer: it ain’t going to be voluntary, and it won’t be without cost either. So does the developer pay a fee to some “body” to get their app recognized by these methods? To me that doesn’t seem fundamentally different from the app stores. Thus my belief that, at least right now, the only option is legislation that creates and enforces this. Which, again, is beyond the user’s control.

One other possible remedy would be to go back to the days of paid upgrades that “fund” the ongoing resources to implement it. But one of the primary reasons all of the OS makers stopped charging for OS updates was that many users didn’t upgrade, due to a lack of perceived value.

Not to mention that I’d think you especially, as well as many others, would have issues with any paid upgrade to fix what essentially is a flaw that should not have been there in the first place.

“To go a step further, if the .apk is not from the Store, then Google should allow other organizations to register as official signers (e.g. Epic) for their distributed .apks. Similarly to the above, the OS will show the user that “this app has been verified by [organization]”, the version, and a link to the app URL. This will clarify the concept that other reputable organizations can also distribute apps outside the Store, and that habitually checking for a reputable source is important.”
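
(For what it’s worth, the mechanics being described here are essentially ordinary code signing checked against a registry of approved signers. A rough sketch, assuming an Ed25519 scheme via the `cryptography` package and using entirely hypothetical names like `SIGNER_REGISTRY` and `verify_distributor`, might look like the following. The code really is the easy part.)

```python
# Rough sketch of the "registered signers" idea; all names and the registry
# contents are hypothetical, and the signature scheme (Ed25519) is just an
# assumption for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# In a real system, some body has to maintain, audit, and fund this registry,
# which is exactly the staffing/funding question raised above.
SIGNER_REGISTRY: dict[str, bytes] = {
    # "Epic": <32-byte Ed25519 public key published by the organization>,
}

def verify_distributor(package_bytes: bytes, signature: bytes, signer: str) -> bool:
    """Return True if `signer` is registered and its signature over the package checks out."""
    raw_key = SIGNER_REGISTRY.get(signer)
    if raw_key is None:
        return False  # unknown organization -> the OS would warn the user instead
    try:
        Ed25519PublicKey.from_public_bytes(raw_key).verify(signature, package_bytes)
        return True   # the OS could then display "verified by <signer>"
    except InvalidSignature:
        return False
```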

See above, and who should pay for the significant costs of such a system, to maintain it, update it, and police it? And who decides? It’s a walled garden by another name.

Great in theory, but again, how do they actually implement, maintain, and “police” this? And who should bear those costs???

So with regard to “safe mode”: that is a staggeringly difficult option to actually develop and maintain, both technically and operationally. To start with, most safe modes (think the Windows version many of us are somewhat familiar with) block off significant access and features of the underlying OS when running in that mode. Even Microsoft’s flagship app suite, Office, is hobbled/restricted to the point of being just barely usable in “safe mode”.

So what if a small 3rd-party app requires access to key portions of the OS to perform even basic functions? Duet, the screen-sharing app on Android/Windows, comes quickly to mind. In your “safe mode” scenario, Duet simply wouldn’t function. So then what?

I agree, conceptually they aren’t. But actually implementing them is far more complex than you seem to believe, for all of the reasons I’ve cited before. And who bears the costs to actually do it? And, “Mr. Free Market,” doesn’t that implicitly create significant barriers to entry?

At the risk of repeating myself, this is extremely complex and multifaceted, something I have few solid answers to, and something every company in my “realm,” so to speak, continues to think through and struggle with.

And I’ve already stated my extreme discomfort with what we’ve come to accept as part and parcel of today’s operating systems and applications, which is that they are never “finished” and that it’s acceptable to release with known flaws and vulnerabilities. The only way I see to even start to change that is with some type of standards, which have to have some type of legal enforcement behind them or they will be ignored.

Again, I have HUGE problems with all of this, and I’ve been vocal here and in my job about many of these as well. How many times have I talked about the Spectre vulnerabilities, which are still with us for the most part, even in Intel’s newest Alder Lake chips?

TL;DR: users are already bearing a disproportionate amount of the burden for their own protection while having little to no access to effective tools to do so. And it ain’t a learnin’ thang… :joy:

5 Likes

BTW: @Marty, I have to hand it to you for inspiring my longest post to date on this forum, and possibly on the old board too. :slight_smile:

3 Likes

I wish I knew a better approach, but I haven’t come up with it. We have to acknowledge that legislation is a 19th century process attempting to regulate technology on the verge of self-awareness in the 21st. If the best tool available is a hammer, hope the problem is a nail.

That is so true!

Just this Monday I was actually talking to some legislators about the idea of trusted versus untrusted sources for app integrity. After 2.5 hours, I think the only thing we accomplished was confusion and frustration for all involved. :frowning:

Been there. Imagine telling a legislator who is pushing a bill to require 100% solar power for electricity that the output of a solar panel at 3:00 AM, when it’s 29°C outside, is zero (0), and he tells you that’s not true.

3 Likes

Here’s the fallacy, gang: no one, even in the walled garden, truly believes it is paradise and fully protected. But when your “wolf” arrives, the people who are “better trained” will have no better defenses than the “sheeple” inside the walled garden. It is a zero-sum game, boys, and the wolves have no apex predators above them to balance the environment. When criminals and sovereign states both have a stake in an enterprise (like cyber insecurity), no amount of education and training will suffice. This is an issue that needs to be addressed by governments with severe punishment consequences, not the usual “turn the hacker into a consultant or government shill.” Think twenty-year sentences to momma’s basement with NO outside communication, not even an old dataless flip phone.

3 Likes

That’s cruel and unusual punishment - on momma.

5 Likes

Did I really say that on the verge of Mother’s Day…OMG am I lame… :woozy_face:

4 Likes

And now an actually on-topic sidestep on our broad topic of security. We haven’t been briefed on or even looked into this ourselves yet, but this might be a step in the right direction, at least on Android?

Though I can already hear the yelling from some quarters…

@Desertlap, before I answer (and risk turning this into an uncontrollable word hurricane :stuck_out_tongue:), can we at least agree that users can learn about basic concepts in digital security, and that it is really only a matter of communication via the OS?

I understand you have concerns about the practical implementation (which I intend to address), but every possible solution will ultimately be fraught with difficulties that will take many man-hours to hammer out. The objective of this thread, I think, should be to focus on directions we can take to tackle the problem.

2 Likes

@Marty, I’ve acknowledged that from the start, and I’ve talked, in multiple threads going all the way back to the old board, about how users need to educate themselves for their own good regardless, because the resources aren’t available to them.

What you can’t seem to acknowledge is that there is way too much the user can’t do, period, no matter how “educated” they may be.

And this is a pain point for me, I’ll admit, as far too often I see companies shirking their responsibility by trying to push it off on the users, and, at least so far, you seem to be part of that camp.

1 Like

Alright, great! So we can agree that users can learn, and that it has value; I don’t think our positions are actually so different.

This is where I think you misunderstand my position. I am simply acknowledging the fact that companies have completely shirked that responsibility, as the end-user already bears full liability within the walled garden.

There is currently a gaping hole in legislation that allows companies, in the event of a data breach for example, to simply issue a “sorry, here’s one year of Equifax, sort it out yourself”. Of course, this state of affairs can’t stand, and legislation is coming…

Now my question to you is: how confident are you that the public and their representatives will be conversant on matters of digital security when that legislation is being written? How easy is it for a lobbyist to use the excuse of user ignorance and dependence to make ever greater demands of lawmakers?

Take a look at Right to Repair: it took a decade of continual effort by individuals and groups like Louis Rossmann and iFixit to establish the very concept of “user serviceability” in the popular consciousness. It was only at that point that lawmakers began thinking from the public’s perspective, and corporations began responding to concerns about sustainability and repair.

I am proposing a method to build that public awareness, but I am open to hearing other suggestions if you have them. :slight_smile:

1 Like

@Marty, I don’t want this to get rancorous, but so far what you have proposed is, IMHO, akin to the duck-and-cover drills I did as a child in elementary school as “protection” against a nuclear attack. And perhaps that’s hyperbolic at the moment, but I’m 100% certain that it’s a matter of when, not if, a software or hardware vulnerability gets exploited in a way that causes multiple deaths. E.g., at a macro level, someone hacks the control systems of the Hoover Dam and releases a flood; or at a micro level, you or someone you care about gets their personal data compromised and then spends years trying to undo the damage.

And I don’t know how I could be clearer than I have been: it’s time for legislation backed by painful penalties for ignoring or flouting it. I have been in business long enough to know that business is only extremely rarely altruistic (because it costs money).

And IMHO, so far what you’ve offered is, at best, a pretty soundtrack for the disaster film.

And I’ll ask again, because you have dodged the question so far: what is it you actually do? I’ve been pretty transparent about myself, including, most importantly, that I am “part of the industry”. And as I’ve always stated, here and on the old forums, there is always an agenda, myself included.

And so, to propose something very specific myself: companies should have a fixed period of time to address a discovered flaw/vulnerability, and if they fail to do so, a substantial, escalating daily fine should be applied. The money from those fines could possibly even go toward compensation in some fashion, and hopefully not just a free year of LifeLock. :frowning:

1 Like

I would support this, but substantial needs to be SUBSTANTIAL. Look no further than Apple’s response to the fines in the Netherlands over in-app purchases for dating apps. Apple did the BARE minimum and really didn’t comply; the fines ran up over $50M, but what did Apple care? Same for their response to right to repair: the “kit” from Apple to repair your own phone screen or battery comes to nearly the same cost as sending it in to them for repair.

I don’t want to sound like Warren, but BIG TECH is BIG and BOLD. You’re going to have to hit them with a telephone pole.

1 Like

That seems perhaps the very slightest bit dismissive? Perhaps “I have trouble seeing actionable solutions” conveys your sentiments equally well? :grimacing:

1 Like

Yeah, more than a bit as I look at it, and my apologies to @Marty, as it was a cheap shot. That’s definitely a weakness on my part, especially when I’m passionate about a topic, as I am about this one. By no means an excuse, though. :frowning:

3 Likes

I think this is something we can all agree is true, and a sad state of affairs, right? (Asking others, not you, @Marty.)

IMO the users should have some responsibility, as in “if you send money to a Nigerian prince, the mail app provider is not liable”. But addressing vulnerabilities should be the responsibility of software vendors and marketplaces. That said (and I should actually read this whole thread; been kinda busy), it seems really hard to legislate this. How do you fine companies for an unacceptable lack of action without coming up with some kind of “scale of fixability” for compromises? It would be great if we could fine companies for not closing a blatant security hole months after it’s been discovered, but how do you actually do this? It feels like it should be like the way buildings are supposed to be “up to code”, but how do you nail down that “code” for… code?

Maybe some evolving minimum standard of security can be developed, e.g. “Thou shalt hash thy password database”, but as I said, it seems to me like something very hard to codify/legislate. So many holes to plug, so many rules to write…
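
(To make the “thou shalt hash” example concrete, here’s a minimal sketch using the standard library’s PBKDF2 with a per-user random salt. The iteration count is just illustrative, and a real rule would probably have to name acceptable algorithms and parameters, which is exactly the codification problem.)

```python
import hashlib
import hmac
import os

# Illustrative parameter; any legislated minimum would need to keep pace with hardware.
ITERATIONS = 600_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) using PBKDF2-HMAC-SHA256 with a random per-user salt."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def check_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Re-derive the key and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)
```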

1 Like

Perhaps he was thinking of that recent realization that at night the sky is a lot colder than earth, and that this temperature difference allows the development of devices that use that temperature gradient to extract energy. Nah… probably not. :smiley:

100% agreement on the complexity, which is likely a major contributor to why very little has been done to date. But you have to start somewhere, and unfortunately most of the free-market correctives, such as going somewhere else (a.k.a. voting with your wallet), simply aren’t an option given how integral tech, especially software, is to our work and personal lives.

And now I’ve depressed myself…