vSphere 6.7 Will Not Run In My Lab: A Parable

“Hey Bob, I tried installing vSphere 6.7 on my lab servers and it doesn’t work right. You tried using it yet? Been beating my head against a wall here.”

“Yeah, I really like it. A lot. Like, I’m resisting the urge to be irresponsible and upgrade everything. What are your lab servers?”

I knew what he was going to say before he said it. “Dell PowerEdge R610s.” I was actually surprised it was that new, and rack-mountable.

“Yeah, you’re out of luck. CPUs before the E3/E5/E7 family didn’t have VT-x extensions in them to make virtualization easy so VMware had to do this thing called binary translation. vSphere 6.5 was the last release that they supported that on because, frankly, it’s slow and everything associated with that technique is getting really old.”

“What the hell? You’d think they’d tell people about that!”

“What, an obscure KB article with absolutely no practical information in it and a reference in the 6.5 release notes to said obscure KB article didn’t catch your eye?” I said, dripping with sarcasm. “I think there was a warning that flashed on the console of affected hardware when you booted, too, but to be honest I only know that because someone mentioned it; I’ve never seen it myself.”

“That’s total crap, like anybody looks at the console. So now what am I going to do? All my gear doesn’t work.”

“One might argue it works just fine. 6.5 will be supported until November of 2021; you could stay on that. You could run 6.7 nested inside 6.5. I know this is a terrifying thought, but you could buy some new equipment, too, something that was on an HCL this decade. Given the current generations of CPUs you’d probably be able to cut your VMware licensing in half while doubling your performance. Stick it to the man, or something.”

“Ha! Somehow I doubt my six licenses would attract their attention. I think I’d need four anyhow for vSAN. Maybe I’ll try the nested thing. Thanks man.”

As a side note to my parable here, if you’re thinking about this and have some time before you have to refresh your hardware, it’s worth waiting to see how all this Spectre/Meltdown stuff turns out. None of the junk the Ferengi at Intel are shipping today is secure, at any level, especially given the latest wave of vulnerability disclosures. AMD might turn out to be a good play moving forward, too, if they’re not in exactly the same spot because they blindly copied everything from Intel. The SSD shortages are subsiding, so you don’t have to plan 60 or 90 days out anymore. Time will tell, so take some time if you can.

Midnight is a Confusing Choice for Scheduling

Midnight is a poor choice for scheduling anything. Midnight belongs to tomorrow. It’s 0000 on the clock, which is the beginning of the next day. That’s not how humans think, though, because tomorrow is after we wake up!

A great example is a statement like “proposals are due by midnight on April 15.”

What you actually said: proposals aren’t welcome after April 14.
What you probably meant: you want the proposals before the date is April 16, i.e. by the end of April 15.

There’s a 24-hour difference there, and if you enforce the deadline accurately people are going to complain, because they were all thinking the second thing (before April 16).

Similarly, this is a problem in change notices and customer communications. When you say there’s an outage scheduled for midnight there’s a very good chance someone will misunderstand when that is. Being wrong by an hour in the middle of the night isn’t so bad. Being wrong by 24 hours gets people riled up and you have enough problems as it is.

The second issue with midnight is when folks represent it as 12:00 AM. When you’re moving fast, as many people are, it’s easy to confuse it with noon. It’s even worse when people mess up and write it as 12:00 PM, because in their head midnight is night, which is PM. Except, of course, it isn’t.

Last, midnight is a popular time to schedule automated processes. I get it, it’s easy. If you run something at midnight you don’t have to do much processing to separate yesterday from today. The problem is that there’s a ton of stuff already running on the hour, and you’re just piling on. Most people try to avoid shopping when it’s crazy busy, so why would you want to run your jobs that way? If you run your job a bit earlier or later, chances are it’ll run faster because you’re not competing with everyone else.

So instead of midnight, what?

1. If you care about time then act like you care about time and write your jobs the right way. Or, decide you don’t care about time so much and put a random sleep in them. Jobs don’t have to sleep long, just enough to avoid parts of the hour that end in :00 and :30.
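
A minimal sketch of the jitter approach, as a bash wrapper (the job name, path, and times are invented):

#!/usr/bin/env bash
# run-nightly.sh: schedule this at 23:47 via cron instead of midnight, e.g.
#   47 23 * * * /usr/local/bin/run-nightly.sh
# then add up to ten minutes of random delay to stay off :00 and :30.
sleep $(( RANDOM % 600 ))
exec /usr/local/bin/nightly-report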

2. Be strict about how you write your times. Write the date in the ISO 8601 format to help avoid global formatting issues (YYYY-MM-DDThh:mmTZD). Mind daylight saving time when you add the UTC offset (-0500 vs -0600, etc.). Don’t be afraid to spell it out in two ways, ISO and how a non-technical reader would want to see it:

“2018-04-05T23:00-07:00 (11 PM Pacific Daylight Time on April 5, 2018).”
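
If you generate these strings in PowerShell, one way to do it is with a .NET format string; the zzz token picks up the local UTC offset, daylight saving time included:

Get-Date -Format "yyyy-MM-ddTHH:mmzzz"

That prints something like 2018-04-05T23:00-07:00 on a host set to US Pacific time.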

3. Don’t schedule things at midnight or noon. Chances are that if you’re scheduling something you could move it to avoid the issue. Deadlines could move to 2200 or 0600 without too much inconvenience, drastically reducing the potential for confusion. Scheduled work could be 2330 (and if you needed to wait until 0000 just adjust the length of the maintenance window). Even if you’re simply telling someone else that something is going to happen, pick a different time that’s clearly inside a specific day.

Time notation drives everybody crazy — look up some of the holy wars around server clocks set to UTC/GMT vs local time. Communication is hard, too, especially conveying technical topics to non-technical people. Let’s be mindful of these tricky spots and work to reduce confusion where we can. That way, instead of ridiculous & angry conversations about definitions of midnight we can have meaningful & clear conversations about the work itself.

No VMware NSX Hardware Gateway Support for Cisco

I find it interesting, as I’m taking my first real steps into the world of VMware NSX, that there is no Cisco equipment supported as a VMware NSX hardware gateway (VTEP). According to the HCL on March 13th, 2018 there is a complete lack of “Cisco” in the “Partner” category:

[Screenshot: the VMware NSX hardware gateway HCL, with Cisco absent from the “Partner” list]

I wonder how that works out for Cisco UCS customers.

As I continue to remind vendors, virtualization environments cannot virtualize everything. There are still dependencies on things like DNS, DHCP, NTP, and AD that need a few physical servers. There will also always be a few hosts that can’t be virtualized because of vendor requirements, politics, and/or fear. Any solution for a virtual environment needs to help take care of those systems or it’s not a solution people can use. Beyond that, most people are unwilling to spend precious time and funds on two solutions. The most amazing solution for VM backup, monitoring, or security is useless if you don’t solve my entire problem, which includes the core dependencies I have running as physical hosts.

Folks like Rubrik and Veeam caught on and solved the problem with backup agents. Now we can back up the physical hosts we still have. Extending NSX services, especially security, to the physical systems would help immensely, too. This functionality is “table stakes” now, base functionality customers expect as we design new systems and refresh old ones, but lots of others are missing the boat, too. HPE only has two models of switches listed. Dell only has three. None of them are 25 Gbps. Most of them aren’t certified for recent NSX releases, either.

This seems like a fly in VMware’s NSX ointment. Is it weak demand for NSX that is leading to networking vendors not supporting VXLAN? Or is it terrible networking products that are causing a lack of NSX sales because of their inability to support these features? Whatever it is, this stands as a big opportunity for players like Arista to stand out and eat Cisco, Dell, and HPE’s lunches by being a big and reliable part of the solution, not just another perpetuation of the problem.

How to Troubleshoot Unreliable or Malfunctioning Hardware

My post on Intel X710 NICs being awful has triggered a lot of emotion and commentary from my readers. One of the common questions has been: so I have X710 NICs, what do I do? How do I troubleshoot hardware that isn’t working right?

1. Document how to reproduce the problem and its severity. Is it a management annoyance or does it cause outages & downtime? Is there a reasonable expectation that what you’re trying to do should work the way you expect? That might seem like an odd question, but sometimes other people do the procurement for (and without) us and there are gotchas they didn’t think to ask about.

In my case with the X710s I felt I had a reasonable expectation that the machine would stay up and that standard features like LLDP, which worked fine with other NICs, would work on these.

Being able to reproduce a problem is key; intermittent issues are really hard to deal with. Get screenshots of the behavior, of the consoles, of the BSODs & PSODs. Get crash dumps if you can.

2. Check the Hardware Compatibility List for the particular OS and hardware you’re trying to use. Make sure it’s on there. If not, you might not have much success in getting support. The HCL might also have clues about driver levels and settings, too.

3. Check the vendor knowledge bases. At the time I was fighting the X710 issues there were no articles about it but now there are, and there are some suggested workarounds.

4. Update the firmware to the latest levels. You should be doing this already as part of your patching process. If you’re having issues your vendor’s support is going to ask you to do this anyhow, so might as well get ahead of it. Do it on the whole machine, not just the malfunctioning component, because sometimes the problem is an interaction somewhere else.

5. Update the driver to the latest levels. The VMware HCL often lists newer drivers you can apply via Update Manager. Try applying one of those. Sometimes a vendor like Intel will supply a newer driver than a server vendor like Dell will qualify. I usually try to stick with what the vendor who sold me the server has for drivers. For Dell & VMware, that often means installing with and/or remediating to the Dell customized ESXi ISO.
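
Whatever driver route you take, confirm what the host is actually running before and after the change. From the ESXi shell that looks like this (vmnic0 is just an example name):

esxcli network nic list
esxcli network nic get -n vmnic0

The second command reports the driver and firmware versions for that NIC, which is exactly what support will ask you for anyhow.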

6. Update the OS to the latest levels. Again, you should be doing this for security reasons but on the off-chance you aren’t patched up to the latest levels do it and see if the problem persists. Support is going to ask you to do this anyhow. This isn’t saying you need to upgrade to Windows Server 2016 from 2012R2 or anything, just be at current releases of 2012R2. Of course, if you have the opportunity to test against another OS like that it might be a useful data point.

7. Open a support case with your vendor. Let them help you, or at least get it on record that there are problems. Ask for escalation if there isn’t timely progress.

8. Let your sales team know that you are having problems. Ask them how long you have to return the equipment since it isn’t performing correctly. Let them know you opened a support case. Let them know you need escalation because the support folks aren’t resolving your problems. Sales teams want you to be successful, and they absolutely don’t want the equipment returned so they’ll lean on their technical resources to fix your problem.

9. Let your management know that you are having problems. Often, vendors will be having separate conversations with management around business goals and whatnot. Executives need to know that a vendor isn’t delivering on their promises. I guarantee that the vendor isn’t going to bring it up in conversation so you need to. Besides, most executives & managers I know love a way to derail a sales pitch.

This is also very important if this equipment needs to be installed and operational in certain timeframes. Management might need to adjust project timelines, reset customer expectations, or do some damage control. Get ahead of it.

10. Let your purchasing people know that you are having problems. If this is new equipment they might want to get involved before they pay the vendor, or stop payment until this is resolved. Governmental & SLED entities sometimes have other mechanisms of recourse under their vendor contracts which can be very helpful.

11. Don’t be afraid to tell the vendor that their ideas aren’t an acceptable fix. For example, the LLDP problems on X710 cards have a fix in newer drivers, but it’s completely manual, and will not work if your card is partitioned.

If you need the partitions then you’re stuck with no LLDP, which is crap. If you have a large cluster or value your time (and even if you don’t your employer probably does) a time-consuming, hard-to-maintain manual fix is unacceptable, too. You paid a price premium for X710 cards and you expect them to be fully supported & functional in your OS. Frankly, you could have paid less and had a NIC that actually worked as advertised out of the box.

12. Have someone high in your organization start the conversation around returning the equipment. This is basically the nuclear option, but you might have to do it. If you’ve done the other steps here this shouldn’t be a surprise. In my case with the X710s we said “it’s been three months with no resolution, we either need to return this equipment or get replacement NICs.” Because we’d worked through it and offered them a chance to resolve it, and there wasn’t a resolution, Dell did right by us and got us replacement Broadcom NICs. Problem solved.

Finding a way through situations like these is half linear troubleshooting and half good communications. Make sure you are doing both. Good luck!

Intel X710 NICs Are Crap

(I’m grumpy this week and I’m giving myself permission to return to my blogging roots and complain about stuff. Deal with it.)

In the not-so-distant past we were growing a VMware cluster and ordered 17 new blade servers with X710 NICs. Bad idea. X710 NICs suck, as it turns out.

Those NICs do all sorts of offloads, and the onboard processor intercepts things like CDP and LLDP packets so that the OS cannot see or participate. That’s a real problem for ESXi hosts, where you want to listen for and broadcast meaningful neighbor advertisements. Under Linux you can echo a bunch of crap into the right spot in debugfs and shut that off, but no such luck on VMware ESXi. It makes a person wonder if there’s any testing that goes into drivers advertising VMware compatibility.
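
For the curious, the Linux workaround is a one-liner against the i40e driver’s debugfs interface. A sketch, with an example PCI address you’d swap for your own (find it with lspci):

echo lldp stop > /sys/kernel/debug/i40e/0000:01:00.0/command

That tells the firmware LLDP agent to knock it off so the OS can see LLDP frames. It doesn’t survive a driver reload, though, so it has to live in a boot script.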

Even worse, we had many cases where the driver would crash, orphaning all the VMs on the host and requiring VMware HA to detect host isolation and intercede. The NICs would just disappear from the host but the host would still be up. Warm reboot and everything is fine. I doubt it was random but we could never reproduce it. The advice from Dell & VMware was crappy, around shutting off the offload processing, updating the driver, updating firmware, double checking that we were running the current versions of everything, doing some crazy dance, slaughtering a goat. Didn’t change anything, we still had an outage a week.

Recently, and what popped this on to my list of complaints, was a network engineer coworker telling me he’s having a heck of a time getting X710 NICs to negotiate speed with some new 25 Gbps networking gear. When he told me what model NIC I just cringed, and had to share my experiences. “But the 520s were such solid cards,” he said.

Dell eventually relented and sent us replacement Broadcom 10 Gbps NICs for our blade servers. My team spent an afternoon replacing them and we’ve had absolutely no problems since (we did the work on “Bring Your Kid to Work Day” and gave the old X710s, which Dell said not to send back, to kids on a data center tour).

Back in the day we used to talk about Broadcom this way, all the problems their tg3 chipset had with offloads and such. It’s been a complete role reversal, with Broadcom being the better, more reliable choice in NICs now. Good for them, but in light of everything recently it’s an absolute shame what the monopolistic Intel, helmed by Ferengi, has become.

If you value your time or system reliability don’t buy Intel X710 NICs.

Update: Jase McCarty reports that newer firmware might fix some of these issues, and also provides some PowerCLI code for disabling TSO/LRO if you’re seeing PSODs (VMware KB 2126909). YMMV.
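
I don’t have Jase’s exact code handy, but the general shape of that workaround in PowerCLI is something like this sketch (the cluster name is hypothetical, and these are host-wide settings, so test on one host first):

# Disable hardware TSO and default LRO on every host in a cluster,
# per the workarounds described in VMware KB 2126909.
foreach ($esx in Get-Cluster "LabCluster" | Get-VMHost) {
    Get-AdvancedSetting -Entity $esx -Name "Net.UseHwTSO" | Set-AdvancedSetting -Value 0 -Confirm:$false
    Get-AdvancedSetting -Entity $esx -Name "Net.TcpipDefLROEnabled" | Set-AdvancedSetting -Value 0 -Confirm:$false
}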

Update 2: John Nicholson reports:

[embedded tweet]

Figures it’d be the vSAN guys with the details, at least around the PSOD/stability issues. Thanks, guys.

Update 3: It appears that newer i40e drivers let you change the LLDP behavior under certain circumstances, but it still doesn’t work right by default, or if you are doing NIC partitioning. These drivers are as of February 9, 2018, which is several years after the release of these cards, and the fix is still a bunch of manual work. Just vote with your wallet and buy someone else’s NICs.

Fix the Security Audits in vRealize Operations Manager

(I’m grumpy this week and I’m giving myself permission to return to my blogging roots and complain about stuff. Deal with it.)

Several bloggers have written about the Runecast Analyzer lately. I was crazy bored in a meeting the other day, so instead of stabbing myself with my pen to see if I still feel things I decided to go check out their website. My interest was piqued when I saw the screenshot where they show security hardening guideline compliance, as well as compliance with the DISA STIG for ESX 6. I do a lot of that crap nowadays.

You know what my first thought was about the Runecast product, though? It was “This is what vRealize Operations Manager (vROPS) could have been, especially the Security Hardening Guide alerts.” When it debuted, the vROPS security audit policies showed immense amounts of promise. They weren’t developed beyond that, though, and now someone is eating VMware’s lunch, to the dismay of all of us who actually own licenses for vROPS.

As someone who has to be deeply concerned with compliance regulations on virtualization systems, who is also an actual customer (not a partner, not a developer, not an analyst), here’s what I want improved with the vROPS security audit alerts:

Instead of a single, outdated, one-size-fits-nothing policy we need policies matching the current guidance for each supported version of ESXi, at each level (1, 2, and 3). I will stack up the policies to meet the level I need for a particular set of objects.

We need separate policies to match the guidance for virtual machines. Rolling the ESXi guidance up with the VM guidance is a mess. Separate them.

We need default actions to fix any deficiencies found. Just like you can resize a VM you should be able to disable the MOB on an ESXi host if it’s found to be enabled, fix the security on a virtual switch, or set a VM configuration flag. It’d be particularly sweet if it could just remediate a whole ESXi host or VM in one pass. After all, the product is “Operations Manager” and security is a massive part of operations, so make it able to manage that stuff. As my six-year-old has taught her two-year-old brother, “DUH.”

We need a policy for the DISA STIG (after 16 months we also need a prototype DISA STIG for ESXi 6.5, but that’s a whole other complaint). Lots of people use — and even more people should use — the STIG to harden their installations, and it’d be grand if life were easier for us people in federal regulation hell. The whole reason we spend gobs of money on these tools is to try to make things easier, but there’s always some catch. Hence this post.

The default vROPS policies should not (I repeat: NOT) complain about the new secure defaults in vSphere 6.5 being unset. It also shouldn’t complain about VMware Workstation parameters, or any other inane unexposed features it checks for. Just tell me if & when something is actually set wrong.

Last, the policies must be kept up to date. Maybe the vROPS team could just use a VPN service and secretly check the VMware Security web site from time to time (perhaps before a vROPS update?) so they don’t have to actually talk to the weird Security folks. Whatever it is, just get it done, and don’t give me bullcrap excuses about competing with other parts of the ecosystem. vROPS was in this space first, fix it up and make it right for your customers.

Thank you. Sorry if you’re a vROPS person and offended, but hey, I said I’m grumpy this week, and I tried to be constructive. Fix your stuff. If you’re a fellow vROPS customer and agree with me, well, there’s nothing stopping you from sending this to your account team as a request for enhancement.

How to Disable Windows IPv6 Temporary Addresses

The default Microsoft Windows IPv6 implementation has privacy extensions enabled, where IPv6 temporary addresses are used for client activities. The idea is that IPv6 has so many addresses available to it that we can create extra ones to help mask our activities. In practice these temporary addresses are largely pointless, and are very unhelpful if firewalls and ACLs are configured to allow access from a specific static address.

By themselves, IP addresses aren’t a good way to authenticate people but they often form another layer of defense. This is especially important for IT infrastructure where there often aren’t (or can’t be) sophisticated authentication mechanisms.

Paste these commands into an administrator-level PowerShell or Command Prompt and then restart your PC:

netsh interface ipv6 set global randomizeidentifiers=disabled
netsh interface ipv6 set privacy state=disabled

I also disable Teredo tunneling, so my traffic isn’t going places I don’t know about:

netsh interface teredo set state disable
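
After the reboot you can verify the changes stuck:

netsh interface ipv6 show global
netsh interface ipv6 show privacy
netsh interface teredo show state

The first two should show identifier randomization and temporary addresses disabled, and the last should show Teredo as disabled.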

Good luck!

Should We Panic About the KPTI/KAISER Intel CPU Design Flaw?

As a follow-up to yesterday’s post, I’ve been asked: should we panic about the KPTI/KAISER/F*CKWIT Intel CPU design flaw?

My answer was: it depends on a lot of unknowns. There are NDAs around a lot of the fixes so it’s hard to know the scope and effect. We also don’t know how much this will affect particular workloads. The folks over at Sophos have a nice writeup today about the actual problem (link below) but in short, the fix will reduce the effectiveness of the CPU’s speculative execution and on-die caches, forcing it to go out to main memory more. Main memory (what we call RAM) is 20x slower than the CPU’s L2 cache (look below for a good link showing the speed/latency differences between computer components). How that affects driver performance, workloads, I/O, and so on is hard to tell now.

Here’s what I think, based on my experience with stuff like this:

First, there are some people out there with gaming benchmarks saying there’s no performance impact. They’re benchmarking the wrong thing, though. This isn’t about GPUs, it’s about CPUs, and the frame rate they can get while killing each other online depends mostly on the GPU.

If you use physical servers that are only accessed by a trusted team, and you have excess capacity, then you should remain calm. Doubly so if you have a test environment and/or can simulate production workloads. Don’t panic; apply your security updates according to your regularly scheduled process.

If you own virtual infrastructure and your company is the only user of it, insofar as everything from the hardware to the applications is run by the same trusted group of admins, don’t panic. Plan to use your normal patching process for both the hypervisor and the workloads, but keep in mind that there might be a loss of performance.

If you own virtual infrastructure and there are workloads on it that are outside of your control you will need to set yourself up to respond quickly to the patches when they are released. I wouldn’t panic, but you’re going to need to move faster than usual. I’d be getting a plan together for testing and deployment right now both for the hypervisors and the workloads you do control, prioritizing the hypervisors. Keep in mind the loss of performance. I might plan to start with a smaller cluster and work my way up to a larger one. I might be warning staff about some extra work coming up, and warning other projects that something is happening and timelines might change a bit.

If you use the public cloud I’d be looking up the Azure, AWS, and Google Compute Engine notices about this problem and seeing if your workloads will be forcibly rebooted in the near future. I’d also make plans to patch your virtual machines, and keep in mind the possible loss of performance depending on your instance type.

If you use containers I’d make sure that your baseline images are all patched once patches are released. Likewise with template VMs, if you don’t have a process to bring them up to current patch levels immediately upon deployment or to build VMs dynamically.

I would stop trusting all internet-supplied VM appliances and container images until they have documented updates. If you didn’t build it yourself you don’t know it’s safe.

In all the scenarios I’d be doing some basic capacity planning so you have a baseline to compare to, auditing to make sure that applications are patched, and auditing firewall rules and access control.

As the British say, keep calm and carry on. Good luck.

Intel CPU Design Flaw, Performance Degradation, Security Updates

I was just taking a break and reading some tech news and I saw a wonderfully detailed post from El Reg (link below) about an Intel CPU design flaw and impending crisis-level security updates to fix it. As if that wasn’t bad enough, the fix for the problem is estimated to decrease performance by 5% to 30%, with older systems being the hardest hit.

Welcome to 2018, folks.

In short, an Intel CPU tries to keep itself busy by speculating about what it’s going to need to work on next. On Intel CPUs (but not AMD) this speculative execution doesn’t properly respect the security boundaries between the OS kernel and userspace applications, so you can trick an Intel processor into letting you read memory you shouldn’t have access to. That’s a big problem because that memory could hold encryption keys & other secrets, virtual machines, anything.

So what? Here are my thoughts:

  1. All of our systems just got 30% more expensive. Put another way, we are all about to lose 5-30% of the capacity we paid for, if our systems are built on Intel hardware. That includes network switches, storage arrays, traditional servers, everything.
  2. I’m guessing there’s a class-action lawsuit in the works already against Intel, if only to establish whose fault this is (not Dell, HP, etc. but Intel’s).
  3. We don’t know the effects of these updates yet, insofar as whether the performance hit will be global, just to CPU or memory, just to I/O, or some mix. We also don’t know how workloads will react to this. If you don’t have a proper test and/or QA environment you’re going to fly by the seat of your pants for a bit.
  4. What we can surmise, though, is that all system benchmarks are now null & void. This is an epoch, the great extinction of performance data from vendors. As of right now, any sizing or performance data offered by a vendor needs to be met with questions about when that data was gathered and at what OS levels & patches, and probably should come with some written guarantees in the contract.
  5. If you have a system or application that’s Intel-based and within 30% of “full” you probably should start thinking about your options, especially if it’s on older hardware.
  6. If you aren’t collecting performance data from your systems you should get that going. There are lots of options, from established vendors like Solarwinds, newcomers like Uila, to open-source tools like Observium. Historical performance data is essential for assessing a situation like this, as well as system sizing and troubleshooting.
  7. Microsoft has announced that Azure instances will be rebooted on January 10, 2018. AWS is dancing around the same message. They don’t have live migration, like vMotion, so it’s a huge deal when they decide to fix something like this. The speed and scope of the reaction should tell you how important this is. It also demonstrates how helpful things like vMotion are in a VMware vSphere environment, where you’ll be able to update the infrastructure without taking applications down (versus the public cloud, which doesn’t live-migrate workloads). Yes, in an ideal world applications are built to not care, but very few of the world’s companies have their systems set up that way (a discussion for the comments or over a beer).
  8. Remember that the public cloud will take a performance hit, too. Yet one more way the public cloud DOESN’T actually help IT. At least a SaaS application means it’s someone else’s problem, though.
  9. Companies that don’t patch won’t have a problem with this, but that’s gross criminal negligence (e.g. Equifax) and should be the subject of whistleblowing action from here on out. Companies that do patch are getting screwed, of course, but this is solid due diligence and part of the cost of doing business. Truth is, regular patching is the #1 way to prevent security problems, but defense-in-depth is equally important (multiple other security controls that can help mitigate a problem like this until you figure out what you’re going to do to fix it). This update isn’t going to be avoidable for long, so you might as well suck it up and deal with it.
  10. I’d bet HPC/supercomputing folks won’t apply this update, but hopefully they have an understanding of their workloads and defense-in-depth. Losing even 5% of a system like TACC’s Stampede would hurt. Seen another way, Intel’s insecure design practices just made things like cancer research 5-30% slower.
  11. If you don’t take snapshots or image-level backups now might be a time to try it, so you can roll things back quickly (there’s a sketch of the idea just below this list). Remember, though, that snapshots are a performance hit on their own. Rolling back the OS patches might be acceptable, too. The point is to have an answer to the question “how do we go back to the way things were after this patch is applied?” You might need to buy yourself some time to cope with these updates.
  12. AMD is probably going to try to make hay here, because they’re not affected. However, AMD systems have classically had problems of their own, such as bugs that ended up disabling all L3 cache, etc. There’s no high ground to be occupied by them. As always insist on actual performance data around vendor promises, and insist that those promises get documented, preferably in contractual form.
  13. Sysadmins are merely the messengers here, but we need to begin communicating this problem to the business around us. Our managers, VPs, CTOs, CIOs, everybody. This is an all-hands issue. The effect on IT is clear, but if we get ahead of it with our management chains it’ll demonstrate our competence & security-mindedness. It’ll also clear the path for when we ask to buy something to cope with the 30% capacity hit.
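
On the snapshot idea in point 11, here’s a hedged PowerCLI sketch (the cluster and snapshot names are made up; remember that snapshots cost performance and disk while they exist, so clean them up):

# Snapshot every VM in a cluster before patching so there's a fast way back.
Get-Cluster "Prod" | Get-VM | New-Snapshot -Name "pre-KPTI" -Description "Before Meltdown/Spectre patching"

# Once the patches prove themselves, remove the snapshots:
Get-Cluster "Prod" | Get-VM | Get-Snapshot -Name "pre-KPTI" | Remove-Snapshot -Confirm:$false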

As always, good luck.

Update (2018/01/03): Should we panic about the KPTI/KAISER Intel CPU design flaw?

Apple Deserves What It Gets From This Battery Fiasco

Yesterday Apple issued an apology for the intentional slowing of iPhones with aging batteries. As part of that they announced a number of changes, like a $29 battery replacement and actually giving people information and choices about how their device functions.

This says a few things to me. First, it says that they have gouged consumers for the cost of a battery all these years. Second, it tells me they are scared enough of these class-action lawsuits to admit fault publicly.

There are a million reasons why an iPhone might perform poorly, especially after an upgrade. This has little to do with the battery, and likely more to do with background maintenance tasks that happen after an OS update. Of course, I am guessing at this, because Apple never tells anybody anything about what is going on. Don’t believe me? Look at the release notes for a software update. They don’t tell people what they fixed or what they changed, or when they do it’s either a lie or a lie of omission. “Improves power management during peak workloads to avoid unexpected shutdowns” is what iOS 10.2.1 said. The word “improve” is a blatant lie, given what we now know about their fix. Perhaps they also feel Steve Jobs’ health has improved since his death.

Beyond lying, they don’t expose controls to users that might allow them to customize behavior or make choices. After all, they’ve been throwing shade at PCs for years, essentially saying choice is bad because it might add complexity. They make it very difficult to service devices, which forces people to choose between Apple’s own now-apparent price gouging and a third party that might disable the device. Apple builds their devices in ways that make common end-user repairs very risky, while saying that those measures are for our own protection. Nor do they expose information about the devices that might enable a user to make informed choices on their own, or enable an honest secondary market for these devices.

The net effect of all this tight-lipped behavior is that they have opened themselves up to legal action from everyone that has a slow iPhone 6, 6s, or 7, for any reason. The average consumer now has very plausible reasons to think that Apple is and has been screwing them into buying new iPhones. After all, Apple has a long history of being dishonest. Look at the iPhone 4S and the faulty water detection devices. Look at Antennagate and all the other problems with cracking, bending, and subsequent screen malfunctions that they blamed on user behavior instead of their own impeccable design. Watch their “geniuses” at an Apple Store weasel out of covering anything under AppleCare. Observe how they’ve quietly brought back the DRM that Steve Jobs removed. Look at their corporate behavior, talking out of both sides of their mouths about their Congressional lobbying, as well as their hiding money offshore. They have made this bed for themselves.

Some folks are saying that this was a colossal communications error on Apple’s part. For a company that prides itself on appearing intentional about everything, they cannot now say this was a screwup. It was a calculated risk, a big bet on a massive lie of omission. They could have chosen to expose battery information in iOS, like my PC laptops have done for decades. They could have written their battery “explainer” then, too. Instead, they bet that they could keep their secondary market & repair lock-in and the status quo by hiding it all, all while their sales went up. And up they went, to record valuations of the company based on sales they dishonestly forced.

So here’s to hoping that the worldwide legal system gives Apple the comeuppance they are due. I hope it’s big enough to cause stock losses, penalizing the investors that support such ongoing dishonesty. More than all of that, though, I hope this is a warning to other organizations. Up-front honesty is always the best policy, even if it seems hard. It never — never — gets better if you let your customers figure it out themselves. And they always will.
