Category: Industry

  • Apple pulls data protection tool

    2025-02-27 12:16:23: And another. This is going to run a while I think!

    US Probes UK’s Apple Encryption Demand for Possible Treaty Violation

    2025-02-26 21:23:50: There are a couple of relevant press articles that popped up after I wrote this:

    US intelligence head ‘not told’ about UK’s secret Apple data demand

    Apple’s Data Encryption Changes in the UK Explained

    ====

    I recently wrote about how the UK government had demanded access to user data worldwide, and things have since moved on. Apple, as far as I can tell, has not fully complied with the order—remember, this demand applies globally, not just to UK citizens. What Apple has done is remove the general end-to-end encryption tool known as Advanced Data Protection (ADP) for UK users. But that’s it.

    From a quick straw poll of several iPhone users, I found that most (around 95%) hadn’t even turned on ADP. So how big an issue is this really?

    The Bigger Picture

    I think the wider issue is a little misunderstood, but to be fair, it’s a complex one. Essentially, if you use a cloud service that isn’t end-to-end encrypted, the provider has access to your data. This means they can be compelled to hand it over to governments when legally requested. That’s not new.

    What is murkier is the growing suspicion that even providers of end-to-end encrypted services may have been forced to insert backdoors—and legally, they can’t disclose it. That, I find horrific.

    Why Apple, and Why Now?

    It’s interesting how many people think this is just an “Apple issue.” I’ve seen people say, “I don’t use an iPhone, so this doesn’t affect me.” That’s not true. Apple just happens to be at the center of this particular story. Other tech giants likely face similar requests, but due to legal gag orders, they cannot disclose whether they’ve complied. Does that make anyone else uncomfortable?

    Apple has said little publicly, but the removal of ADP in the UK seems to confirm compliance, at least partially.

    If you back up your Android phone to Google, those backups are not end-to-end encrypted. If you store data in Microsoft 365 (Office 365), that’s not end-to-end encrypted either. What does this mean? It means the government can request your data, and Microsoft or Google can legally access and hand it over. Even Microsoft 365 Customer Lockbox doesn’t prevent this—it’s merely an administrative control, not a security barrier.

    The Real Issue: End-to-End Encryption

    So why the uproar over Apple’s ADP? The key issue is end-to-end encryption. When enabled, even Apple cannot access the data you store on iCloud, meaning they cannot comply with data access requests. Now, with ADP revoked for UK users, a significant portion of that data is once again accessible to Apple—and, by extension, to governments that request it.

    What’s Still Encrypted?

    To clarify, ADP encrypts everything a user stores in iCloud with end-to-end encryption. Without it, data is still encrypted, but Apple retains the encryption keys—meaning they can access and disclose it if required. However, some iCloud services remain end-to-end encrypted, even without ADP:

    • Passwords & Keychain
    • Health data
    • Journals
    • iMessage (not including cloud backups)

    For a full list, check out Apple’s iCloud Data Security Overview. Anything labeled “end-to-end” means Apple has no access.

    NOTE: If you back up your iPhone to iCloud, messages are included in those backups, which makes them accessible.

    The Trust Issue

    What really concerns me is how many other providers have been forced to weaken end-to-end encryption — and have complied without anyone knowing. WhatsApp is supposedly end-to-end encrypted, as is Facebook Messenger, but do we trust that there isn’t now a backdoor?

    I suspect several MPs are quietly backing away from their WhatsApp groups as we speak.

    What Happens Next?

    This story isn’t going away anytime soon. Apple hasn’t fully complied—can you seriously imagine they would? The UK government demanding access to a US citizen’s iCloud backup would be a legal minefield. Can you picture Apple’s response to that?

    I’ve also seen a lot of “I’ve got nothing to hide” responses. That’s a flawed stance—it even has a name: The “Nothing to Hide” Argument. Privacy isn’t just about secrecy; it’s about maintaining control over personal information.

    So where does this leave us? If end-to-end encryption can be quietly removed or bypassed, is any cloud data truly private anymore? I’ll be watching closely to see what happens next….while also privately encrypting my own stuff.

  • Ember Heated Coffee Mug

    A little while ago I was moaning on the Internet (shocked you are, I’m sure) about how I keep leaving half-drunk cold cups of tea/coffee everywhere…anyway, somebody took some pity on me and told me they were sending me a little gift. What turns up but an Ember Heated Coffee Mug in stainless steel.

    When I took this out of the box I couldn’t work out whether I thought this was the stupidest idea since the invention of stupid ideas, or whether it was going to be the best thing ever. That’s not something that often happens to me with tech – I usually know pretty quickly how I’m going to feel about something.

    Fundamentally, all this thing does is allow you to set a temperature for your drink, and the mug will keep the drink at that temperature. For example, I like tea/coffee at about 57/58 Celsius. I connect the mug to my phone, use the app to set the temperature to the one I like, and then fill it with my drink. If the drink is below the temperature I want, it heats it up. If it’s hotter, it lets it cool until it hits that temperature, and then it maintains it there. It’s all powered by a rechargeable battery, topped up on a funky desk charger (more on that shortly).

    Image shows a screenshot from the Ember app running on Android
    Ember Application

    So, either the stupidest thing ever, or brilliant. Which is it? We’ll get to that.

    Does it work? Fundamentally, absolutely yes. If I make, say, a mug of tea it’ll keep it around 57 degrees for a good 90 to 120 minutes, which is more than enough time for me to still find it cold four hours later, but to get the odd hot mouthful along the way. From that perspective it works really well.

    Let’s get back to those charging pads – they are not standard wireless charging pads – they’re unique to the Ember mugs. From a low charge the units take about 2 to 2.5 hours to fully charge – that’s quite a long time, however I found it’s never a problem as I tend to top them up as and when I’m using them – i.e., there’s a pad on my desk that I tend to use. Besides, where else are you going to keep it other than on its charging pad?

    The stainless steel looks good too – it’s a very nice finish and very easy to keep clean. It’s not however very big at 295ml in volume.

    So was it the stupidest thing in the history of stupid or…? Well, given that 295ml was a little small for me I now have another one, bought with my own money. This one is in black and is the larger 414ml unit – some 40% larger by volume than the 295ml. So yeah, I’ve learned to really like the thing, and I absolutely must do to have spent 150 GREAT BRITISH EARTH POUNDS on one. Yeah. They’re expensive – real expensive.

    They do however fulfil their function, and they do it well.

    It’s not all joyous however – there are some things that bothered me, and I’ve managed to resolve most of them. So let’s look at those annoyances.

    The Charging Pad Uses a Barrel Connector

    Why, for the love of everything USB-C, is the charging pad provided with a plug with a barrel connector? That’s really, really annoying. I don’t want to be carrying another plug about if I don’t need to, or having to plug something in for some non-standard device. Boo Ember, BOOOO. Saying that, I did find a solution – and it cost me a fiver. The solution is a Type C USB-C Female Input to DC 5.5 * 2.1mm Power PD Charge Cable fit for Laptop 18-20V from Amazon. This cable has USB-C on one end, and the correct barrel connector on the other. A little caveat however – I had to trim down the plastic sheathing on the barrel connector to allow it to fit properly on the charging pad. Once I’d done that, it works fine.

    Some other observations with charging. It must be connected to a USB-C PD port. Interestingly, from a consumption point of view, you’ll see the unit peak at about 30-35w charging for a few minutes, before dropping back to circa 2-10 watts during the charge. It then seems to short-burst charge rather than constant trickle – that’s a bit odd. It’s a shame it’s so low as that’s why it takes so long to charge – although like I say, I’ve not noticed it being a problem, and I’ve rarely found it without charge.

    Image shows the power consumption of an Ember Mug while charging.
    Ember Mug Charging Power Consumption

    A lid!

    I don’t like having open tea/coffee mugs sitting about – they’re too easy to spill and I always have tech stuff about. Nobody wants to be in an episode of Chris Shaw and the Drowned Laptop. The units are fairly base-heavy – the 295ml unit is about 400 grams, with the 414ml one about 457 grams – but they’re still full of liquid.

    Fortunately however you can get lids – just be careful that you get the right lid for the right mug size!

    295ml Ember Stainless Steel Lid

    414ml Ember Ceramic Lid

    Each is another 15GBP of course – the price of being into consumer tech can be uncomfortable.

    The App

    Ah, the app. Now initially this went pretty well. Setup was easy, it found the mug, it did a firmware update (on a mug – what a time to be alive). Admittedly I didn’t need to use the app very often. I have the same temperature for tea & coffee, so I set it, and forget it. The only time I need to use the app is to change the temperature or if I’m curious about the charge state.

    Then, I imagine the Ember Software Development Team spent a little too long attending classes on Software Design run by Sonos. For a few months the app was buggy, and a huge pain in the backside. It would often lose your config requiring you to log in again, or lose the mug completely requiring a complete reset, or completely ignore whatever you set in the app, etc. Yeah, a complete Sonos.

    Fortunately they do seem to have resolved that now. The app now (on Android at least, I haven’t really tried it on my iPhone) seems fairly stable and it’s rare I have a problem with it.

    image shows the ember app with two configured mugs.
    Ember App

    Summary

    So should you buy one? Absolutely not, unless you like the idea and the amount of money involved won’t stop you paying your mortgage. If that’s the case, get one immediately. I think they’re ace! I’d be a bit wary of the Ember travel mug however. My travels around the various Reddit forums seem to indicate those units are not well liked – although to be fair that’s anecdote rather than any real data.

    They’re now in regular use in my house, and I rarely have a drink in anything else. I have several drink mugs – things like the Yeti Mug – and while they’re good, they present a different problem. Often with those the drinks are still too hot for quite a while after you’ve made them! With the Ember they seem to cool at a reasonable rate, and then just maintain the temperature you set.

    I do wonder how long the battery will last (in terms of lifetime resilience), but again I’ve no real data on that. Would I be happy if they lasted say 3 years? I’d hope to get longer, but I’d imagine that’s a reasonable timescale for them.

    Anyway, if this is coming across as a confused tech review, it’s because I’m slightly confused by the product. Love what it does, don’t like the barrel charger, and more importantly the stupid cost.

  • Smart Plugs – It’s a trap!

    I know some of you may find this a bit of a shock but I think I got a bit carried away with the smart home thing. For a while there you could only really turn on any lights, or use any of the several kitchen appliances, by shouting at another appliance to turn the ****ing thing on. I could often be heard arguing with my kitchen when all I wanted was a bacon sandwich.

    The idea is THE dream though, isn’t it? So Back to the Future – ‘Turn on the Lights!’.

    Anyway, while I can still control most things in my house by voice, I rarely do. One thing that has survived the smart home cull however are the smart plugs.

    There’s a few reasons for that:

    -> I use a lot of power. My home labs cost a bit to run, so I try now to turn off what I can when it’s not in use.

    -> I want to know what power I use. I need to expense some.

    So I have ended up with two types of smart plugs. There are the original ones that I bought, which were single plugs that could either control one device or, of course, connect to an extension lead. The ones I used were the Meross MSS310 units. These have proven very reliable with a decent app. I can of course turn them on/off by voice – ‘Turn off the TV’ for example – and I do still do that sometimes. You can also set up routines, so ‘Leaving the house’ turns off everything you’d not want on when you’re not there, for example. That hasn’t had a lot of use, as I just don’t go anywhere.

    More importantly however, the power tracking from these has proven really insightful and useful. The following for example shows the current power draw (1) of my core lab, its power usage for last month (2), and the cost for last month (3). Yes, the cost. I can tell it the cost per kWh and it works it all out for you.

    Image shows the power draw of a smart plug called 'Core Hub'
    Core Hub Power Draw
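
    For the curious, the sum the app does behind that screen is nothing magic. Here’s a minimal sketch in Python – the kWh figure is whatever the plug reports for the period, and the unit rate is just an illustrative tariff (swap in your own):

      # Minimal sketch: energy used (kWh) x unit rate = cost for the period.
      # The 0.29 GBP/kWh below is illustrative - use your own tariff.
      TARIFF_GBP_PER_KWH = 0.29

      def period_cost(kwh_reported: float, tariff: float = TARIFF_GBP_PER_KWH) -> float:
          return kwh_reported * tariff

      # Example: a plug that reported 85 kWh last month (hypothetical figure).
      print(f"GBP {period_cost(85):.2f}")  # -> GBP 24.65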

    I’ve found this really useful. Can you see the trap coming yet?!

    Knowing the power consumption of things has helped me knock about a third off of my power bill. That’s mad. There are also environmental benefits to that of course. I just no longer leave things running. My backup NAS only backs stuff up at night, for example – there was absolutely no reason for it to be on for the other 22 hours of the day. The power analysis helped me work out that stuff.

    This has however led me on to wanting to understand more. (The trap, it’s coming.) So I looked into, and invested in, smart power strips. These are similar devices but essentially each plug on the power strip is its own smart plug. The ones I opted for were the TP-Link TAPO P304M. They cost me about 25 quid on Amazon, and are very easy to set up.

    What these give you is the ability to set up collections of devices, and of course to set up automations. My real power users are my ‘core’ – which is my lab/storage etc. – and my desk. So I have fully configured both with these power strips. In the app you can see all of the plugs – i.e., all of them, everywhere – or view them by collection; in my example, ‘Desk Stuff’ or ‘Core’.

    Image shows a screenshot of the TP-Link Software for their smart plugs running on an iPhone
    TAPO App

    Now I can control each individual plug on those strips either through an automation process, or individually. So for example I have ‘Full Desk Off’ that turns off absolutely everything on my desk, and just a normal ‘Desk Off’ that turns off everything while leaving my charging ports for phones etc. all live.

    Image shows automations in the TP-Link app for their smart plugs.
    Power Shortcuts

    You also get a significant amount of power information for each plug on each and every strip. Oh my word, my advice is you need to be careful with this. If you’re not careful there will be SPREADSHEETS. This for example is the power consumption of my Mac mini M4 Server – this is on 24×7 and runs my Plex, and some other automation processes.

    Image shows the power monitoring of a single port on the TP-Link Smart plug app
    Single Power Energy Consumption

    As a quick sidenote, those M4 Minis are fab low power units for Plex and general automation type stuff. Mine is only the base model 256GB/16GB unit, however it handles everything I’ve thrown at it, including a couple of VMs, just fine – while absolutely sipping on power:

    Image shows the power consumption of an Apple Mac mini M4
    M4 Power Consumption

    It’s usually lower than 15w – the above is when it’s busy! I also run it in low-power mode as I rarely need its full performance. I mean, the toughest thing I ask it to do is some video conversions, and for those I don’t really care if it takes 2 hours or 5.

    The Trap

    The trap with this stuff is that you can, if you’re not careful, become slightly obsessive about power monitoring! Like I say, I have full costs now on my always-on stack etc.

    Image shows a spreadsheet analysing the cost of running various items for 24x7x365
    Cost Analysis

    Summary

    I’m really happy with both the Meross plugs and the TP-Link power strips. They both seem to be fairly accurate on the power calculations – I’ve plugged one into the other to compare – and they’re within 2-3% of each other. I like the apps. The Meross app is arguably slightly nicer to look at and simpler to view, but it’s not a huge gap. Would I prefer them to be the same app…? Of course. I made the mistake however of having a power strip given to me to play with….so then ended up investing in the TP-Link ones myself, hence the two apps. It’s not a problem though, as I tend to use them for different things.

    The Meross single plugs I use for measuring and controlling collections of devices, whereas with the TP-Link ones I’m interested in measuring and controlling individual items. It works brilliantly for this purpose.

    Like I say, I’ve stepped back a little from fully voice-automating stuff. The lights thing and controlling the kitchen were particularly challenging on that front – but both apps fully integrate to most voice services such as Alexa etc. so you can do that if you want.

    Most of the automations I use are on my phone and from the Tapo app, and they work really well.

    Now all I need to do is wean myself off obsessing about the numbers. I like numbers, they’re pretty!

  • UK demands access to Apple users’ encrypted data

    This story has been doing the rounds this week, and it’s blowing my mind that there isn’t more noise about it.

    Image shows the BBC News headline ‘UK demands access to Apple users’ encrypted data’
    News headline

    The UK is demanding that Apple put in a back-door to their encryption system that would allow the government to view anyone’s data held in iCloud. Not only that, Apple are, by law, not allowed to tell us that’s what the government is doing. I could not be more WTF without turning myself inside out.

    The scope of this is also huge – it’s access to encrypted data worldwide, not just for people in the UK. I mean, come on. I see the US has already started to kick off about it.

    They urge her to give the UK an ultimatum: "Back down from this dangerous attack on US cybersecurity, or face serious consequences." The BBC has contacted the UK government for comment. "While the UK has been a trusted ally, the US government must not permit what is effectively a foreign cyberattack waged through political means", the US politicians wrote. If the UK does not back down Ms Gabbard should "reevaluate US-UK cybersecurity arrangements and programs as well as US intelligence sharing", they suggest.
    Screenshot of BBC News

    I can partially – I think, so far – accept that the government’s intentions are not to generally search and analyse people’s data through some form of mass surveillance…but I can’t imagine that conversation hasn’t come up. No doubt using the ‘won’t you think of the children‘ defence.

    This idea of opening up a back-door into end-to-end encrypted services is a bit terrifying from a technical perspective and from a general understanding point of view. Do you genuinely think it’s beyond the realms of possibility that a method to exploit that back-door would be found…? Or do you think it would only ever be used by the good guys?

    I was having this conversation with a few non-techie friends recently (I have some), and they didn’t see the problem. Here’s the thing though, it would mean the government could see their data, but any bad-actor with half a brain would still easily be able to protect their stuff.

    The only data this gives the government access to is that of idiot criminals and every member of the public. Let me explain.

    Let’s say I’m a bad guy, and I want to have a conversation with another bad guy – let’s call him Donald. Now, I want to use publicly available end-to-end encrypted services such as WhatsApp or iMessage, but I know the government has access to that data via their back-door (fnarr).

    Oh my! What do I do! Well, I do what any sane person would do and encrypt my data using my own keys before I use that service that the government has access to. Hell, I could use far stronger encryption than was originally implemented in WhatsApp or iCloud anyway.
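
    To show what I mean, here’s a minimal sketch in Python using the cryptography library’s Fernet (symmetric encryption). Assume Donald and I have already exchanged the key by some other means – the point is simply that only the ciphertext ever touches the messaging service, so a back-door into the service gets you nothing readable:

      # pip install cryptography
      from cryptography.fernet import Fernet

      # Key generated once and shared with Donald out of band (an assumption here).
      key = Fernet.generate_key()
      f = Fernet(key)

      # Encrypt *before* the message ever touches WhatsApp/iMessage/iCloud...
      ciphertext = f.encrypt(b"Meet at the usual place, 9pm")

      # ...so this gibberish is all the service (or anyone with a back-door) sees.
      print(ciphertext)

      # Only someone holding the key can recover the message.
      print(f.decrypt(ciphertext).decode())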

    So where are we now in that scenario? The bad guys have secure comms, and everyone else’s data is exposed to the government. I suppose there’s an argument that if the government saw you were using private encryption you’d stand out, but what are they going to do…outlaw the use of encryption?

    This is such a bizarre and unnecessary attack on public privacy, obviously designed and implemented by people who have little idea of how encrypted communications work.

    Imagine what other back-doors they’ve asked for – HTTPS for example, for your banking apps or everything else?

    Why you’re not furious about it is beyond me.

  • New Home Lab Beast – Minisforum MS01

    I’ve been on the hunt for new home-lab virtualisation servers. I previously used two 2018 Mac mini i7/64GB units. They have been solid units, and have served me well. I used Parallels Desktop for 90% of the virtualisation, with some VMWare Fusion in there too. They’ve lasted YEARS and have been rock-solid…but their performance against their power consumption has been lacking compared to current offerings.

    So I took a dual approach – for my constant stuff that needed to be on all the time (backups, some video conversion automation, AdGuard type stuff) I bought an Apple M4 Mini. More on this in another article, however it sips power while also being highly capable.

    For my lab stuff – think 90% Windows, 10% Linux – I needed something x86. First, I looked at Geekom and their Mini IT13, and it was disastrous. I actually bought one direct from Geekom, and three from Amazon. All of them after a week or two just wouldn’t turn on.

    Picture shows three orders of the Geekom PC from Amazon.
    Amazon Geekom Orders

    I returned them all – so much so that Amazon put me on the returns naughty step and I had to get AmEx involved, who were, as usual, absolutely badass at getting my money back.

    This is when I stumbled on the Minisforum MS-01. The specs on this thing seemed out of this world.

    -> Intel i9 13900H

    -> Dual DDR5-5200 up to 96GB

    -> 2 x USB4

    -> 2 x 2.5Gb Ethernet

    -> 2 x 10Gb Ethernet

    -> HDMI

    Have a look for yourself at all these ports – absolutely mad.

    Image shows the back of the MS-01 including 2 x SFP+, 2 x 2.5Gb LAN, 2 x USB 4, HDMI, 2 x USB
    MS-01 Rear

    Internally, the unit supports up to three NVMe slots. THREE. One PCIe 4×4, one 3×4 and one 3×2. Additionally, slot 1 can be configured to use a U.2 NVMe. The graphics are integrated UHD 750, I think, but – and here’s something else that amazed me about this unit – it also comes with a half-length PCIe 3×4 slot! With it being half-length you’re limited by what you can put in there, but there are certainly options out there.

    I was quite blown away when I saw the specs of these units, and couldn’t order one fast enough and spec it out. The spec I’ve gone for is:

    -> 96GB RAM

    -> 1 x 4TB NVMe

    -> 2 x 1TB NVMe

    This is connected now over 10Gbe for the main LAN, and 2.5Gb for my HyperV machines. Absolutely bonkers considering its size.

    What’s the performance like? Well, let’s look at the primary SSD to start. This is a Lexar 4TB 4×4 that I already had.

    Image shows the performance throughput of the SSD. 4170MB/s write, 4717MB/s read.
    SSD Performance

    That’ll do. The other two SSDs are a bit slower at about 2200MB/s read/write, still really acceptable.

    The Intel 13900H in the MS-01 has a base TDP of 45 watts but apparently can boost up to 115 watts – it’s a mobile processor of course. By way of example, the desktop i9-13900 has a base of 65W and boosts to 219W…but requires significantly more cooling.

    You can see the Geekbench benchmarks for the 13900H here. If you want a bit of a giggle here’s the comparison between the 13900H and the binned M4 Max (I have the unbinned M4 Max). So processor performance is pretty good too – certainly good enough for what I need it for.

    What about power consumption? At idle, the unit seems to average between 25 and 33 watts, which is 0.6kWh to 0.8kWh per day.

    Image shows the power consumption of the MS-01 at 32w.
    MS-01 Power Consumption

    This does seem a little high compared to what some other people are reporting – several are reporting idle figures of 15-30 watts, but I’ve not seen it go that low. Perhaps it’s the spec and of course I have the 10Gbe interface in use.

    What about under load? It seems to peak at about 115-120w but then settles down to about 90w. Assuming 90w consumption that’s 2.2kWh/day (rounded up), which isn’t insignificant, but then how often are you going to have it flat out…?

    Assuming you work it hard for 8 hours a day, but it’s fairly idle for the rest, running costs at GBP0.29/kWh would be as follows.

    Image shows the power costs of the MS-01
    MS-01 Power Consumption
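
    If you want to sanity-check that sort of table yourself, the sum is simple enough. A rough sketch using the figures above – 8 hours a day at around 90w, the remaining 16 hours idling at around 30w, and GBP0.29/kWh:

      # Rough daily/annual running cost for the MS-01 under my assumed duty cycle:
      # ~90 W under load for 8 hours, ~30 W idle for the other 16 hours.
      LOAD_W, LOAD_HOURS = 90, 8
      IDLE_W, IDLE_HOURS = 30, 16
      TARIFF_GBP_PER_KWH = 0.29

      kwh_per_day = (LOAD_W * LOAD_HOURS + IDLE_W * IDLE_HOURS) / 1000
      cost_per_day = kwh_per_day * TARIFF_GBP_PER_KWH

      print(f"{kwh_per_day:.2f} kWh/day, GBP {cost_per_day:.2f}/day, "
            f"GBP {cost_per_day * 365:.0f}/year")
      # -> 1.20 kWh/day, GBP 0.35/day, GBP 127/year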

    Just for the purposes of comparison – the M4 Mini I bought for my 24×7 automation stuff (Plex, backups etc.) averages about 5w at idle, and uses 65watts under full load.

    Image shows the power consumption of the Apple M4 Mini
    M4 Mini Power Consumption

    It’s a fairly decent difference isn’t it? Saying that, the M4 Mini can’t do all the x86 virtualisation that I need, but it’s still a reasonable comparison.

    So what do we have at this point? Well, we have a small, powerful PC, with great networking, internal storage, and reasonable power consumption. There must be some downsides, right?

    Well, so far, not too many. I do have some observations however. Firstly, noise. If these units are next to you, you will hear the fans. They seem to spin up even with the slightest of activity. They’re not particularly loud however you will hear them. They don’t bother me at all.

    I also have a weird issue with the networking. Firstly, those two 10Gbe SFP+ ports. If I try to use both of them they work for a little while, but eventually I start to get problems with disconnections and the performance plummeting. If I had to guess, it’s because I’m using SFP+ to RJ45 connectors in there, and they’re getting heat-soaked – so in effect, I can’t use both SFP+ 10Gbe connections at the same time. Not a huge issue given it also has two 2.5Gb ports.

    Next is a weird one, and it sounds like a configuration bottleneck I’m hitting rather than an actual problem with the unit. With HyperV configured to use the 2.5Gbe interface only, and with management etc. on the 10Gbe port, I only get circa 2.5Gb performance on the 10Gbe port. In fact it’s so close to 2.5Gbe it makes me think this is a config issue. If I remove the HyperV configuration I get nearer the 10Gbps. Something I’ll look into in the future I think, however it’s not that big a deal to me in reality.

    2025-02-25 20:32:04: I’ve now resolved this – it wasn’t just suspiciously close to 2.5Gbps, it was 2.5Gbps…but it was reporting as 10Gbps. Essentially I was using a cheap non-managed 10Gb/2.5Gb switch, and an SFP+ to RJ45 converter on the MS-01. I swapped the switch for a 10Gbps QNAP managed switch and what do I see…the port running at 2.5Gbps. Swapping out the SFP+ to RJ45 connector and just using a straight fibre connection, I now not only have the 10Gbps connection, it’s also running a lot cooler. I’ll test both 10Gbps connections shortly and see if running them both is workable.

    Image shows a file copy at 10Gbps speeds
    10Gbps File Copy

    What am I running on it? Well, my longer term plan is to configure it as a ProxMox unit; for now however it’s running Windows 11 and HyperV. Not a great combination, but good enough for something I’m working on. I mean, look what it’s running right now:

    Image shows the MS-01 running several HyperV machines
    HyperV

    That’s not too shabby is it?

    Oh, while I remember, the unit also supports Intel vPro for remote management – this allows for remote control, including BIOS-level KVM access. How cool is that? Very useful when trying to manage the unit remotely, and far more like grown-up server solutions. It’s pretty impressive.

    Costs

    Now on to the thorny issue of costs. These are not particularly cheap units. Let’s look at this build – although I did have the SSD kicking about already.

    Image shows the cost of the MS-01 built for my lab.
    Lab Build

    NOTE: For my US readers, the above costs include our sales tax (VAT) at 20%.

    So the cost isn’t insignificant, but for the capability I think it’s a bargain?!

    Support

    Now, this is worth being aware of. I’ve seen a few horror stories about dealing direct with Minisforum, and if it was anything like my experience dealing with Geekom I’d be very nervous about buying direct. Buy from Amazon however and any problems you can make their problem, and their returns process is usually excellent.

    What’s Coming

    It’s also worth being aware of the Minisforum MS-A2 that’s due for release circa April 2025. This has an AMD Ryzen 9 7945HX 16 Core/32 Thread processor in it which will be even faster than the i9-13900H, so perhaps it may be worth waiting for that? Then again if you’re always waiting for what’s around the corner you’d always be waiting for what’s around the corner.

    Summary

    I’m very happy with this Minisforum unit. The connectivity is bonkers, its power consumption reasonable, and its performance is certainly good enough for my purposes as a virtualisation lab host. I’d go as far as to say I’d be happy to use it as my everyday machine if needs be, it’s that capable. It’s a little pricey – certainly compared to the Geekom, but then so far it hasn’t decided to become a paperweight – and there’s some concern about direct support from Minisforum, but so far everything has been rosy and I suspect I’ll end up buying another one.

  • Just One More Thing

    Toward the end of 2022 I was finishing up a reasonably sized global project in Unified Comms – one that I’d been involved in (or around) since its inception. Looking back, I think earlier in 2022 I was feeling fairly burnt out by this project. I cover most timezones as I’m fortunate enough to be one of those people who doesn’t sleep that much, so I’d happily pick up the things I knew others would find difficult to accommodate. There was also a lot of detailed – but very cool – stuff that had to be done. Most tasks required input from several areas, all with shared but different priorities. Nothing new there – that’s just how mid to large businesses are, and knowing how to negotiate them is part of the experience.

    You know that phase you go through – the stage where the cool stuff is done (the why, how, what with, what are our dangers/risks, how do we mitigate them etc.)…you’re now just focusing on finishing up the project control items, and those less interesting elements that are still part of your deliverable. They’re important – I’m not suggesting anything else – but it’s the clear up after a job well done. It doesn’t feel quite as satisfying.

    I’ve experienced this throughout my career, so have personal mechanisms to deal with it and keep things interesting. I do so by considering what I’m doing today and then thinking about how I could do it better. I also try to think about how I can make it reusable. In this particular project I’ve ended up writing a few tools that I’ll certainly be using elsewhere. I’ve a decent library of reusable stuff stretching all the way back to…Netware. Wow. Automated Netware 4.11 Server Deployment anyone? Even has login scripts. That’s not really the point of this blog though.

    I’d decided that when this project finished toward the end of 2022 I was going to give myself a few months off. I don’t mean a few months off doing nothing, I just mean stepping back from the usual level of effort and intensity to give myself some time to regroup, and to get ready for The Next Cool Thing. There’s always more cool things.

    It’s the bit that happens after the project completion, when you’ve finished those line items. It’s the post-project ‘just one more thing’. I’ve realised how immensely rewarding it can be. I’ve always been aware of this, but never really given it a lot of thought – until now.

    Those ‘just one more thing’ items tend to be advice and sometimes interesting left-field bits of work. How would you deal with this? We have this weird issue – any ideas? Hey, have you got some pointers on how we can get started with…? Could you give us a hand with…?

    I think I’ve worked out why I like that part of a project so much; it’s because it makes you feel valued. Now, I’m sure many of you have already come to this conclusion, however for people like me it’s a little bit of a surprise. I’ve never really been one to need positive affirmations or feedback at work – I’m freelance after all, and you’re judged by your work and I suppose by how much work you have. I prefer the delegation element of situational leadership rather than the supporting (as a team-worker) – tell me what you need, I’ll get it done. Negative feedback is of course welcome – it helps us work better, and it gets us both better results. I may even agree with you. If I’m honest about such things however, I am the worst critic of my own work. The struggle between ‘perfect but not done’ and ‘done but not perfect’ is real – and perhaps a subject for another day. I’ve re-written that sentence several times as it sounds like a humble-brag – it’s genuinely not. I like things to be accurate. Anyone who has seen my dial-plans will understand this. There’s ‘works’ and there’s ‘right’. Sometimes ‘works’ is enough.

    Paradoxically I can find the 3 months or so post the closure of a larger project thoroughly enjoyable and rewarding. It’s the ‘just one more thing’. 

    That one more thing is ‘we value your input’. 

    I’ve been involved with companies that are so invested in their feedback cycle and reviews that they don’t often see that they could do all that stuff – and get better results – by working on their day to day, rather than formal reviews. That isn’t my area however, so I’ll not offer much of an opinion there. I’m sure those formal reviews have their place, I just think those same companies are missing a trick by thinking they’re all there is.

    Anyway, I’m getting to the end of January and I’m now looking forward to The Next Cool Thing. My couple of months off didn’t really materialise although I did get a decent amount of down-time to regroup. I even got half-way through sorting out the hallway cupboard.

  • Synology DS923+

    I recently made the mistake of having the opportunity to play with some enterprise storage – lots of glorious NVMe etc. – and it got me eyeing my main storage NAS with disdain. WHY. I was happy with it before I started playing with the enterprise stuff….Anyway, I’ve got my hands on a Synology DS923+ unit with the following specifications:

    -> Synology DS923+

    -> 32GB RAM (Non-Synology)

    -> 4 x 16TB Toshiba ENT Drives

    -> 2 x Crucial 2TB NVMe SSD (Configured as a storage pool – more on that in a minute)

    -> 10Gbe NIC

    I’ve had some time with the unit and have now been able to test its performance, and assess its power usage. I’ve put it all together in a spreadsheet (because of course I have), and you can see that spreadsheet at the following link, including costs (and links to suppliers), performance, and power consumption.

    2023-01-03_Synology_DS923.xlsx

    The performance matrix is below.

    DS923+ Performance Figures
    DS923+ Performance Figures

    There are no great surprises in there – the unit nearly maxes out the 10Gbe connection on reads, and is about 80% of the way there on writes – that’s on both the NVMe Storage Pool and the spinning drives. On my previous QNAP unit over 10Gbe I was getting in the region of 300-400MB/s, mainly due to the PCIe interface on the QNAP being a slower type than that on the Synology. So the performance is a fab upgrade for my usage scenarios. Even if you don’t have a 10Gbe connection, the units will easily max out the cheaper 2.5Gbe interfaces. It’s a little disappointing I think that even in today’s market Synology are providing the units by default with 2 x 1Gbe connections. You’d imagine 2.5Gbe would be becoming the norm now? The added E10G22-T1-Mini RJ45 10Gbe adapter (catchy name) is another 150-200GBP but it does perform, and it supports 2.5Gbe & 10Gbe over RJ45. Note it’s a proprietary slot too so you can’t just pick up any 10Gbe interface.
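
    As a quick sense check of what ‘maxing out’ actually means in MB/s terms, the raw line rates work out as below – real-world figures land a bit under these once protocol overhead is taken off:

      # Theoretical ceilings, before any protocol overhead.
      def line_rate_mb_per_s(gigabits_per_second: float) -> float:
          return gigabits_per_second * 1000 / 8  # decimal megabytes per second

      print(line_rate_mb_per_s(10))   # 1250.0 MB/s ceiling for a 10Gbe link
      print(line_rate_mb_per_s(2.5))  # 312.5 MB/s ceiling for a 2.5Gbe link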

    I also tested the encrypted shares – as you can see, write performance is significantly impacted with an encrypted folder. Roughly there’s a 45-50% performance penalty when writing to encrypted shares. That doesn’t bother me too much, however bear that in mind if you want to encrypt everything.

    Another surprise was the support for storage pools on NVMe. There is however a huge catch – it’s only properly supported on Synology-provided NVMe drives. I don’t have any of those, and they’re expensive and small, so I wasn’t getting any either! Fortunately, there’s a way of configuring the storage pool with non-Synology NVMe via SSH. It’s not that hard, and I’ll write up shortly how to do it. The only downside I have seen so far is that if you lose an SSD to failure you have to reconfigure the whole RAID again via the command line. That’s one of the reasons I’m using RAID 0 (with no protection) – I figure if I’m going to have to do it anyway in a failure scenario then I’ll just recreate. I protect the contents by using snapshot replication to the spinning drives anyway, and there’s nothing time-critical on those units, so it’s all good. One thing to be aware of though!

    It’s similar with the RAM – I’ve used third-party Kingston RAM as it’s cheaper than Synology’s (and I could get hold of some). It works just fine, however you do get a warning that you’re using non-Synology RAM. A little annoying but hardly earth-shattering. It’s interesting that they’re now going with the ‘supported’ devices approach – the drives I have aren’t on their list either, but of course work just fine.

    So far I’m impressed with the unit. It’s quiet, fast, and the power usage is acceptable. Spin down on the Toshiba units works as expected – I’ve moved most apps that would be ‘live’ to the SSD to reduce dependency on the spinners unless I’m using them.

    How much do they cost to run? Well, I’ve worked that out for you too. Have a look in the spreadsheet above on the ‘Operational Costs’ sheet. Snapshot below – click on it for a bigger version.

    DS923+ Running Costs
    DS923+ Running Costs

    Based on my usage profile and electricity tariff the unit costs about GBP14/month to run, or GBP168 a year. Considering everything I get the unit to do – it’s got a ton of apps on including a couple of virtual machines, my media storage, cloud sync and my backups – that’s pretty reasonable. You can adjust the percentage spun down, idle and under load to get figures nearer what you’d expect for a consumer unit – from what I’ve read, consumer numbers are nearer 40% spin down, 40% idle, and 20% load. Apparently – that sounds low to me? If your usage is that low why would you have one?!

    Anyway, any issues? Well, transcoding Plex is problematic on these due to the lack of hardware encoding and Quicksync. Fortunately I don’t do that – my Plex is on an older Mac mini that does several other things and that handles Plex just fine. If you have a dependency on Plex, perhaps consider a unit more suitable.

    So far however, this unit is an absolute flyer. Fast, and that DSM OS is a joy to use. It is annoying that you have to pay extra for the faster ports (10Gbe), that storage pools aren’t supported unless you’re using Synology SSDs, and that you get the message about the drives not being on the supported list…The ports they definitely should address – 1Gbe in today’s market just doesn’t cut it in my opinion.

    Anyway, some nerd info for you.

    Happy New Year and all that!

  • The Art of Technical Documentation

    I got asked something the other day, that while a simple question, took me some time to ponder and fully answer. Why do I produce so much documentation for stuff?

    First, some context. I spend most of my time designing, deploying and even sometimes – operating – mid-market IT systems. Mostly Office365 and Unified Communications areas, as I find them the most interesting. By mid-market I tend to focus on the up to 10k seats area, mainly as I find that’s a size that’s manageable and you spend most of your time actually doing the interesting technical stuff rather than lots of incremental minor things due to risk management. I have worked on big stuff – some of it properly big (400K+ users) – and while that has its own attraction and interest, I find the mid-market more engaging.

    Over the years I’ve developed a solid strategy for dealing with complex systems, and managing change. I don’t mean the formal change management processes – they of course have their place – I mean how *I* manage what I’m doing, how I establish the outcomes I want, and how I get there. This is the bit that tends to produce a lot of documentation.

    Let’s imagine we have a requirement to implement a reasonable amount of change for a system on a remote site. My documentation process will follow this methodology:

    • The why and the what. Here it’s a brief to ensure that everyone understands what’s being asked for, why it’s being done, and what the expected outcomes are. This has been vital in catching some misconceptions/differences between people.
    • The how. This details how I’m going to achieve what is being asked. All touch points, impacts, expected outcomes etc. Also includes how we roll back. This is often included in the change management process.
    • The doing. What happened when we implemented? This is great for lessons learned and for future similar projects.
    • The final state. Sometimes referred to as ‘as is’ documentation.

    This feels like a lot doesn’t it? It really isn’t in the real world. I’ve been following this process for so long that it’s second nature to me. It has some very specific pay offs too. I have a complete audit trail of what was asked for, what was agreed, and what happened when we did The Thing. That’s come in very useful in the past.

    Do you know what’s really important about the process though, that I think is often missed? This process helps me vastly reduce risk in increasingly complex systems. This has been a developing area in the world of technology (Complexity Theory), with a key part of it being Systems Theory. Understanding how one system is dependent on another, and affects several others – it’s challenging.

    This documentation process then – for me – is a lot more than just the documentation. It’s a process of fully understanding a requirement, establishing how we get there, and then helping the people running the systems also have a handle on it after we’re done. The process is arguably just as important – if not more so – than the documentation itself.

    This is the bit I find interesting, and it took me some time to explain. 

    The pay offs from this process are several. From a freelancer perspective, this makes me reliable. Typically I achieve what I am being asked to achieve, and usually in the manner we’ve planned and agreed. Another key pay off, is that it makes me look at the technology I deal with in a lot more detail than is perhaps necessary for specific items. That always aids understanding, and that is never a bad thing in technology.

    Anyway, a simple question, not such a simple answer. Writing good documentation is also challenging in the technical space as you have a wide range of readership to consider, but that’s a subject perhaps for another day.

  • People who say nope. nope. nope. nope.

    I’ve been thinking about approaches to technical issues recently – large issues rather than small ‘my laptop is broken’ type ones – and it’s got me thinking about people’s approaches to complex problems. There seem to be two types (I’m sure there are many more, but I’m going with two for this).

    1. Keen to work a way to a solution.

    2. Nope. Too hard. Risk. Nope.

    The former – people like that I enjoy working with. Working out the correct way to address problems and actually getting things done is a skill in itself. One I’ve spent a lot of time on personally, as I think it’s one of those things that separates techs. It’s not just about the technology, it’s about getting a business requirement done. Knowing the correct balance between perfect/not done, and done but not perfect…well, it’s a skill.

    What about the second type? Well, I find these groups incredibly difficult to work with – but it always ends up the same way. Let’s consider what I’m talking about with an example:

    We have three blocks, numbered 1, 2 and 3. The requirement is at least ONE of those blocks has to be in that box over there. Let’s look at how this goes:

    -> OK – let’s put block 1 in the box!

    —> NOPE! I need that, I like its colour.

    -> OK – 2?

    —> NOPE! Thursday.

    -> 3?

    —> We couldn’t POSSIBLY do that.

    So we have a defined requirement, and yet you’ve made that requirement impossible to meet. What tends to happen is this goes around and around for a bit until somebody gets thrown off the roundabout in frustration. Typically, it then gets escalated to somebody who has some say in stuff.

    • BOSS: Right, we’re putting block 1 in the box. There are some issues to fix and some pain, but it’s less pain than not having the stuff in the box.
    • Everyone: OK, let’s do that.

    Tech – Facepalm

    The thing with this approach is that it damages relationships, as it’s so exhausting. It’s why I much prefer working with your Type 1s. They work through things, they get stuff done. Type 2s? They just end up being told they’re wrong. Pretty much whatever they do, they’re going to be wrong. How people work like that is beyond me, it must be utterly ruinous.

  • Phone Number Hijacks/Spoofing

    I’ve seen a couple of instances of phone number hijacking again recently – typically WhatsApp – but you can also see it with services like Skype (Consumer) and the like. What am I talking about?

    Well, let’s consider Skype (Consumer). When I make a call from my Skype client it actually appears on the remote end to be from my mobile. When I set this up, I have to enter my mobile number. I then get a code on my mobile which I have to enter to show that I own that number. When I do, I can then make calls with an outgoing number of my mobile.

    Spotted the hack yet? Get that code, and you can make phone calls appearing from my mobile.

    Imagine you’re selling something and the buyer is reasonably wary. Conversation goes like this:

    Buyer: I want to make sure you are who you say you are. I’m going to text you a code and if you can tell me what the code is we can continue.

    Innocent: Sure! 

    Innocent: Gets code, sends it to buyer.

    *BOOM* – the buyer can now make phone calls appearing to be from your mobile.

    It’s a similar hack with WhatsApp. Just replace being able to make calls to owning your WhatsApp account.

    Be very wary of telling people these codes. Make sure you trust the service asking, for a start. Here’s a real example:

    Example hijack WhatsApp
    Example hijack WhatsApp
  • Protecting your data

    A friend was after some general advice on storage/availability and backups for home stuff so rather than just reply specifically to him I thought I’d put my general thoughts together on non-professional data protection. The stuff in my day job is generally focused on 99.999% (5 nines) availability so the backup and availability strategy is usually beyond what you’d want to do for home.

    The questions are usually around should I implement RAID? How do I backup? What should I backup? Where to…? To answer these questions you really need to look at fundamental aspects of your data protection strategy, and they are:

    • Recovery Time Objective (RTO). How long would it take to restore your data, and how much of a pain would it be not having that data for that recovery time? There’s also the question of effort to restore data, but that’s a softer consideration – if you’re busy, this can be a significant added burden to any potential restoration process – arguably this massively increases the pain of not having your data for that recovery time.
    • Recovery Point Objective (RPO). How much data can you accommodate losing between backups? For static stuff that doesn’t change often for example you may only backup once a week or so.

    From a general data protection point of view, the 3-2-1 backup strategy is the one most talked about. What this means is:

    • You have three copies of your data at any one time.
    • Two of them are on physically different devices.
    • One of them is away from the main premises you use – i.e. off-site storage.

    Considering the above is how I would come to a backup & data protection strategy. A couple of quick points:

    • RAID is not a backup. Using RAID is a strategy that affects your RTO and RPO. Lose a RAID array and you’re in trouble aren’t you? Having RAID does not affect the 3-2-1 strategy, it is an availability technology, nothing more. It vastly reduces your RTO & RPO. Lose the array with no backup and your RTO & RPO become infinite…
    • Automation is key to a good backup strategy. If you have to do something manually, the one time you think you’ll be fine is the one time you’ll be crying into your soup.
    • You may want to consider having a second off-site copy. Why? Well, consider ransomware protection. If your backup solutions are automated to the cloud for example, there is an (albeit remote) possibility that your off-site backups also get encrypted with Ransomware. To see what I mean in a bit more detail, have a look at my video here. RansomWare – Protect Your Stuff!

    So, in reality, what would a backup solution look like?

    • One device with live data.
    • One device with a copy of your live data.
    • One off-site copy.

    So where does the RTO and RPO come in to it? Well, it comes down to how quickly you need your data back, and how much you can lose. Traditionally, most systems would back up every evening (often using a Grandfather, Father, Son scheme) and this will probably be enough for most home systems. What’s the worst case here?

    Let’s say you backup at 23:00 overnight. One lovely Friday at 22:59 your main storage blows up/gets flooded with milk (don’t ask). Well, you’ll have lost all of your data from 23:00 the previous night to 22:59 on the day of the milk issue. That’s your Recovery Point.

    Next, you need to consider how long it takes to restore your data – that’s your recovery time.

    Where does RAID come in to this? Like I say, this is an availability consideration, not a backup. If you:

    • Have a good backup system that’s automated and backs up to another device every night.
    • Would be OK with losing 24 hours of data.
    • Would be OK with the time it takes to get access to your data….

    …. Then what will you gain from RAID? Not a lot really. However, consider that you may want everything to just carry on working even in the event of a drive failure – in that scenario RAID is a great help. You can take a drive failure, carry on as you are, and replace the drive. Note you’re still backing up at this point.

    When considering your backups from device one to device two, do you just want them to be exact replicas? There’s danger in this. Imagine corrupting some stuff and not realising. You’ll end up with the corruption duplicated on to the other devices, and your off-site backup. This is where having the Grandfather, Father, Son model of historical backups comes in – it takes more automation to achieve, and you may of course consider it well beyond the requirements for home.
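
    If you’re curious what that automation looks like, here’s a minimal, purely illustrative sketch of a Grandfather, Father, Son retention decision in Python – the retention counts and dates are assumptions, and in practice your NAS or backup tool handles this for you:

      # Illustrative GFS retention: keep the last 7 daily (sons), the last 4
      # weekly/Sunday (fathers) and the last 12 monthly/1st-of-month (grandfathers)
      # backups; anything else is a candidate for pruning.
      from datetime import date, timedelta

      KEEP_DAILY, KEEP_WEEKLY, KEEP_MONTHLY = 7, 4, 12

      def backups_to_keep(backup_dates):
          newest_first = sorted(backup_dates, reverse=True)
          daily = newest_first[:KEEP_DAILY]
          weekly = [d for d in newest_first if d.weekday() == 6][:KEEP_WEEKLY]
          monthly = [d for d in newest_first if d.day == 1][:KEEP_MONTHLY]
          return set(daily) | set(weekly) | set(monthly)

      # Example: a year of nightly backups (hypothetical dates).
      history = [date(2023, 1, 1) + timedelta(days=i) for i in range(365)]
      keep = backups_to_keep(history)
      print(f"{len(history)} backups, keeping {len(keep)}, pruning {len(history) - len(keep)}")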

    So…do I need RAID? It’s not as simple a question to answer as may first appear, is it? Personally I think that anything that keeps your data available, and avoids me having to resort to backup systems, is absolutely worth it. You really want your backup system to be a ‘last resort’ type thing, so in reality I always tend to RAID my stuff. This is where NAS devices come in by the way – not just for their RAID systems but also for their in-built backup solutions. Let’s take how I used to use my Synology stuff (I’ve upgraded now for 10Gbe and I have a ridiculous internet connection so rely on Azure stuff a lot more now):

    Primary Device

    Synology 918 with 4 x 12TB drives giving about 36TB available.

    Secondary

    Synology 416 (I think) with 4 x 12TB drives giving about 36TB available.

    Overnight the primary device is backed up to the secondary, and it has a 12 month retention – i.e. I can go back to pretty much any point in the previous 12 months. In addition to that, live data that changed often was snapshotted from the primary to the secondary about 4 times an hour.

    Finally, the secondary Synology also sync’d those backups to an off-site hosted solution. 

    Probably way over the top however the principle can be easily replicated without all that expensive gear.

    Primary Device

    2 x 6TB drives, mirrored, so 6TB available. If you get a drive failure your data is still available, and you can replace the drive.

    Your primary device also replicates your data to cloud storage.

    Secondary Device

    A 6TB or larger hard disk with point-in-time incremental backups of the primary.

    Far smaller, but with the same principle, and you get the same range of dates for your recovery point (i.e. you can restore back to a point in time).

    Told you the question isn’t as simple as you’d imagine.

  • Is this a business purchase Sir?

    I’ve noticed recently that whether online or in real stores like PCWorld/Currys, I’m being asked a lot more whether my purchase is for business use or not. 

    “Is this a business purchase, Sir?”.

    They laughably refer to ‘giving you a VAT receipt’ which of course you should be entitled to anyway.

    The cynic in me thinks there is only one possible reason this is happening more and more – to get out of consumer protection laws – distance selling rights for example. Here’s the thing – if you say it’s a business purchase very few of these rights then apply. How crafty is that?

    Anyway, here’s the point – if you ever get asked if it’s a business purchase always say no. You’ll retain far better consumer rights, and lose nothing in the process.

  • VMWare Fusion 11.0 – It’s a mess

    The arms race between Parallels Desktop and VMWare Fusion has continued with the recent release of Parallels Desktop 14.0 and even more recently VMWare Fusion 11.0. I use both products – typically VMWare for my server stuff, and Parallels for desktop type setup (Think Office/Outlook/Windows 10).

    I’ve upgraded my Fusion to version 11 – and oh am I regretting it. There’s tons of problems with it:

    • Wow it’s slow compared to Parallels
    • I can’t run ANY virtual machines if I’ve previously run Parallels or VirtualBox
    • The network performance is all over the place.
    • Did I mention how slow it was? Startup/shutdown & Snapshotting.

    I’ve tried this on multiple machines, and all with similar results. The most irritating one is that if I try and use VMWare Fusion after having either Parallels or VirtualBox running, I get an error saying ‘Too many virtual machines running’. The only way I seem to get around it is by rebooting and not using Parallels or VirtualBox at all. It’s infuriating.

    I’m sure VMWare will iron out the issues, but for now it’s going in the bin and I’m going to ask for my money back.

    Video below shows the general performance and issues in more detail.

  • Upgrading the ram in an Apple iMac Pro

    One of the physical differences between the 2017 5k iMac and the 2017 iMac Pro is the RAM upgrade process. In the normal 5k unit there’s a small door at the back that grants you easy access to the RAM slots – you can upgrade the RAM yourself, and very easily.

    With the 2017 iMac Pro, the RAM is upgradable, but you cannot do it yourself. Well, unless you’re quite brave with an expensive machine – you have to dismantle it. For anyone who’s ever dismantled an iMac, it can be quite challenging.

    Anyway, if you look at the Tech Specs for the iMac Pro, you’ll see the base RAM is 32GB, and it’s configurable to 64GB or 128GB. The units have four memory slots in them:

    iMac Pro RAM Slot Layout

    Notice the ‘memory upgrade instructions’? That just takes you to the Apple support pages. In addition, you can see the memory specifications here: 

    iMac Pro memory specifications

    Note this bit:

    iMac Pro RAM Details

    In effect, an Apple Store can deal with warranty issues. If you want the RAM upgraded, however, you have to visit an Apple Authorized Service Provider (AASP). Anyway, I could not find this information originally, and it’s seriously making me question whether this was the way it was worded in the first place. But hey, what can you do.

    When I bought this iMac Pro, there were quite significant delays on getting the units, especially with any customisation. After speaking to Apple, they suggested buying the 32GB unit and bringing it back to have the RAM upgraded. Simple, you may think.

    Twice I took the iMac Pro to my local Apple Store. Twice I regretted not remembering that the box handle isn’t strong enough to carry the weight of the machine, but that’s a different story.

    The first time I attended they struggled to find any information on the upgrade process, and suggested that as the units were so new, and so different, they wait a while and try again.

    So I did wait a while. Approximately 6 months. 32GB of RAM wasn’t terrible in the unit for the uses I had, however now I was struggling, so it needed upgrading.

    This time, rather than placing my trust in the Genius Bar, I contacted Apple via their support telephone number, and was referred to their online chat (!) as it could take a while to work out. Fair enough. I then spent some time with the online chat people, who were very helpful, and arranged a visit to my local Apple Store to have the RAM upgraded…..and this got complicated.

    When I turned up at the Apple Store there was much ummmm and ‘well, we don’t know the process for this…’. I was fairly insistent this time, given it was my second trip and the fact I’d verified the process with Apple Support first.

    They did the right thing by suggesting I leave the machine with them if I could – fortunately I have other kit and just needed it done, so happily left it in their capable hands.

    They called me back when they said they would – am I the only person who thinks such small points make a huge difference to your perception of service? Whoever put those reminders in their service system needs a commendation – this has always been my experience with Apple.

    Anyway, the result of the call with them was a bit….interesting. They had no process to upgrade the RAM, and now they were pushing all the upgrades to the AASP. You can feel me groaning at this point…have to go pick it up, take it somewhere else, etc. It was a bit frustrating to be honest – you’d expect them to know their processes.

    This is not however what happened. Apple twice recently have surprised me with their level of service. What did they do? They ordered me a replacement unit with the specification I actually wanted, and replaced my original unit, with the idea being I simply pay for the upgrade.

    That was a great outcome for me. Admittedly I had to wait a couple of weeks for it to turn up, but no real drama with that – I have other equipment to use.

    Weird experience isn’t it? I get the iMac Pro units may be a bit unusual, but I kinda thought the Apple Stores would be a bit more on top of how they deal with such things? The final outcome for me though was an effective one, and one that surprised me. Why, I’m not sure, as I’ve only ever had excellent service from Apple.

    Anyways, I’ve now got enough memory. For now. 

  • When is storage not storage?

    When is plain old storage not plain old storage? When it’s Network Attached Storage (NAS) that’s when. 

    I don’t tend to delete stuff, as storage is relatively cheap and I usually find that if I delete something I, at some point in the near future, will be irritated that I’d deleted said thing. I have my email archives going back to 1999 for example. Yes, yes I know. 

    I’ve always shied away from network attached storage. Every time I’ve looked at it I’ve been caught by the network transfer rate bottleneck and the fact that locally attached storage has for the most part been a lot quicker. Most of my kit is SSD driven, and volume storage was fast Thunderbolt type. Typically I’d have a tiered storage approach:

    • Fast SSD for OS/Apps/normal day to day stuff.
    • Thunderbolt 3 connected SSD to my virtualisation stuff.
    • Spinney ‘volume’ storage.

    The thing is, my storage was getting into a mess. I had loads of stuff connected. About 12TB off the back of my iMac #1 (my virtualisation machine, for stuff that’s running all the time), and about another 15TB off the back of my everyday iMac Pro. That’s a lot of spinner stuff. Trying to ensure important stuff was backed up was becoming more and more of a headache.

    So, I finally gave in, mainly due to the rage around cabling more than anything else. I started investigating a small server type setup, but then it occurred to me I’d just be moving things about a bit and I’d still have a ton of ad-hoc storage…so I turned to the Network Attached Storage devices from the likes of Synology and QNAP.

    Oh my, how wrong was I about NAS units. They’re so capable it’s ridiculous, and they’re not just raw storage. I have a few of them now, and they’re doing things like:

    • Storage, because, well, I needed storage.
    • A couple of virtual machines that run some specific scripts that I use constantly.
    • Some SFTP sites.
    • A VPN host.
    • Plex for home media.
    • Volume snapshots for my day to day work areas.
    • Cloud-Sync with my DropBox/OneDrive accounts.
    • Backup to another unit.
    • Backup to another remote unit over the Internet (this is more of a replica for stuff I use elsewhere really).
    • Backup to a cloud service.

    I did run into some performance issues, as you can’t transfer to/from them faster than the 1Gbps connection – which is effectively around 110MB/s (megabytes per second) – so 9-10 seconds per gigabyte. My issue was that I had other stuff trying to run over the 1Gbps link to my main switch, so if I started copying up large files over the single 1Gbps links from my laptops or iMac(s) then of course everything would slow down.

    That was fairly simple to fix, as the Synology units I purchased support link aggregation – so I set up a number of ports using LACP link aggregation (effectively multiple 1Gbps links) and configured my main iMac machines with two 1Gbps link-aggregated ports. Now I can copy to/from the Synology NAS units at 110MB/s while running other network loads to other destinations, and not really experience any slowdowns.

    Just to be clear – as I think there’s some confusion out there on link aggregation – aggregating 2 x 1Gbps connections will not allow you to transfer between two devices at speeds >1Gbps, as it doesn’t really load balance. It doesn’t, for example, send one packet down one link and the next packet down the other. What it does is work out which is the least busy link and *use that link for the operation you’re working on*.

    If I transfer to two targets however – like two different Synology NAS units with LACP – I can get circa 110MB/s to both of them. Imagine widening a motorway – it doesn’t increase the speed, but what it does do is allow you to send more cars down that road. (Ok, so that often kills the speed, and my analogy falls apart but I’m OK with that).

    I can’t imagine going back to traditional local attached storage for volume storage now. I do still have my fast SSD units attached however they’re tiny, and don’t produce a ton of cabling requirements.

    I regularly transfer 70-100GB virtual machines up and down to these units, and even over 1Gbps this is proving to be acceptable. It’s not that far off locally attached spinning drives. It’s just taken about 15 minutes (I didn’t time it explicitly) to copy up an 80GB virtual machine for example – that’s more than acceptable.
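    A quick back-of-the-envelope check on that copy time (just arithmetic – real-world throughput will vary with protocol overhead and disk speed):

    link_speed_mb_s = 110   # realistic sustained rate over a 1Gbps link
    vm_size_gb = 80

    seconds = (vm_size_gb * 1000) / link_speed_mb_s
    print(f"{vm_size_gb}GB at {link_speed_mb_s}MB/s is roughly {seconds / 60:.0f} minutes")
    # Roughly 12 minutes, so "about 15 minutes" including overhead sounds about right.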

    The units also encrypt data at rest if you want – why would you not want that? I encrypt everything just because I can. Key management can be a challenge if you want to power off the units or reboot them as the keys for the encryption must either be:

    • Available locally on the NAS unit via a USB stick or similar so that the volumes can be auto-mounted.
    • Or entered manually as your encryption passphrase to mount the volumes after each boot.

    It’s not really an issue as the units I have have been up for about 80 days now. It’s not like they’re rebooting every few days.

    The Synology units have an App Store with all kinds of stuff in there – properly useful things too:

    Synology Home Screen

    Anyway, I’m sure you can see where I’m going with this. These units are properly useful, and are certainly not just for storage – they are effectively small servers in their own right. I’ve upgraded the RAM in mine – some are easier to do than others – and also put some SSD read/write cache in my main unit. Have to say I wouldn’t bother with the SSD read/write cache as it’s not really made any difference to anything beyond some benchmarking boosts. I’d not know if they weren’t there.

    I’m completely sold. Best tech purchase and restructure of the year. Also, I’m definitely not now eyeing up 10Gbps connectivity. Oh no.

    As a quick side-note on the 1Gbps/10Gbps thing – does anyone else remember trying 1Gbps connectivity for the first time over say 100Mbps connectivity? I remember being blown away by it. Now, not so much. Even my Internet connection is 1Gbps. 10Gbps here I come. Probably.

  • Faster Internet – FOR FREE!

    Faster Internet – for FREE! Can you really get something for nothing? Well perhaps not, but there are things you can do to both optimise your Internet connection and protect your usage.

    What do I mean? Well, given most of my readership tends to be techy in nature, I’m not going to go in to massive amounts of detail, but in effect every Internet Provider tends to assign you their DNS servers…and these are usually far from optimal. A lot of techs I know then default to switching to Google’s DNS (8.8.8.8 and 8.8.4.4) because they’re pretty quick.

    Yes, they’re pretty quick…but you’re gifting Google the ability to see every domain name you resolve to an IP address. If you’re comfortable with that then fair enough – I’m not, however. Google makes me uncomfortable from a privacy perspective.

    So, let’s look at Cloudflare. Many of you will be familiar with them through their web caching/CDN technologies, but few seem to be aware they also have public DNS servers available – 1.1.1.1 and 1.0.0.1. Cool addresses, hey? Forgetting the cool addressing, just look at the performance – they’re properly fast.

    There are various DNS benchmarking tools out there – OK, they’re not the most interesting of tools, but they do give you decent information. Consider the performance difference between the Google servers and Cloudflare:

    DNS Performance

    As you can see, other than for locally cached results, Cloudflare nails the performance – and the reliability – in all the required areas. I know it doesn’t look like much, but the differences add up, and you can feel the difference.
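    If you’d rather measure it from your own connection than take a screenshot’s word for it, here’s a rough sketch of the comparison – it assumes the dnspython package is installed (pip install dnspython), and it’s nowhere near as thorough as a proper benchmarking tool:

    import time
    import dns.resolver

    SERVERS = {"Cloudflare": "1.1.1.1", "Google": "8.8.8.8"}
    NAMES = ["bbc.co.uk", "apple.com", "example.org"]   # arbitrary test names

    for label, server in SERVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        start = time.perf_counter()
        for name in NAMES:
            # resolve() is dnspython 2.x; older versions use resolver.query()
            resolver.resolve(name, "A")
        average_ms = (time.perf_counter() - start) * 1000 / len(NAMES)
        print(f"{label} ({server}): {average_ms:.1f} ms per lookup")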

    What about the privacy thing, of the provider knowing everything you do? Well, I suppose there is an element of me just not trusting Google – any company that needed a tag line of ‘don’t be evil‘ has issues. Cloudflare, on the other hand, do seem to have a defined policy of never storing your queries on disk, and of being audited to ensure this is true. Apparently. I have no proof this is true, beyond the stated policy.

    Anyway, you can read the launch blog here:

    Announcing 1.1.1.1: the fastest, privacy-first consumer DNS service

    I’ve been using the service for a while, and it is faster than the ones I was using previously, and by some margin. The privacy element is a nice cherry on the cake.

    The future expansion plans to cache more can only improve response times, you’d hope.

    Oh, as a funny and slightly bizarre side-note, some ISPs won’t actually be able to route to 1.1.1.1. I’m sure they’re working on resolving that – it’s easy to check if you can use the system simply by firing up nslookup (whether in Windows/Linux/MacOS) and then selecting the server with ‘server 1.1.1.1’ and seeing if you can resolve any addresses:

    NSLookup Example
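    If you’d rather script that check than fire up nslookup, here’s a minimal equivalent – same dnspython assumption as the sketch above:

    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["1.1.1.1"]
    try:
        resolver.resolve("example.com", "A", lifetime=3)   # give up after 3 seconds
        print("1.1.1.1 is reachable and answering queries")
    except Exception as exc:
        print(f"Can't use 1.1.1.1 from here: {exc}")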

    How you implement this for your Internet connection varies – on my platform for example I have my own DNS server that caches stuff, so I just set Cloudflare as the upstream resolver for that. You can also update the DHCP settings on your router to issue 1.1.1.1 and 1.0.0.1 – that’s probably the simplest way of doing it for most people, I imagine.

    It really does make a difference.

    Honest.

  • The Curse of Monthly Subscriptions

    2018-05-25 Wow, fastest disagreement ever on a blog post. Some of the pricing below (Spotify for example) is student pricing, as apart from my day job of not being Batman I’m also a student. This year’s Prime will also drop in price because of that.

    ====

    It’s no secret that a lot of the tech industry has moved to subscription models where you pay a monthly fee to access products and services. So much not a secret that it’s dull, so I’m not going to talk about that too much.

    What I am going to talk about, however, is how all those cheap things can suddenly make you realise you’re paying a ton of money for things you previously didn’t pay a ton of money for. All the small things add up, don’t they?

    I’ve been trying to rationalise my subscription services as I’ve got a lot of cross-over in some areas, and it’s made me realise how much I’m actually paying for stuff. Things like:

    Monthly Subs Costs

    For the sake of transparency I’ve highlighted my work stuff in green – slightly differing cost there as there’s no VAT, and they’re tax deductible expenses (I think).

    Anyway, they do add up don’t they?! I’m going through the process of rationalising stuff as there are some obvious cross overs there. Audible for example I’ve now got enough credits to listen to audio-books until my ears bleed, and yet I keep amassing more. 

    Cancelling Audible is always interesting – here, have it for half price! What that actually tells me is that they’ve been over-charging me since day 1.

    The cloud backup service, Dropbox and Office365 all have cloud storage in them – more than enough – so why do I have all three? I suspect when you see the low monthly cost combined with the effort involved you think meh, what the hell. Then you’re invested. They’ve got you.

    Zwift I don’t use in the summer either, I go outside like a normal person. So why am I paying for that?

    The push to subscription services can really alienate me as a consumer. Take two companies I used to have a lot of time for – now, I wouldn’t touch their products if you gave them away with a free alpaca:

    • 1Password
    • TomTom

    What did they do that was so terrible? Well, they tried to force me to dump my initial capital investment and switch to their subscription model. 1Password for example I had invested fairly significantly in – multiple clients, MacOS, Windows, phones etc. and I liked their product. Then a push to the subs and a lack of updates to the normal capital purchase clients. It felt like a complete stitch up. Like I say, now, whenever anyone asks about password managers instead of recommending a company the first thing I say is ‘don’t use 1Password’.

    Same for TomTom. I paid a fair amount of money for their product on my phone. Next thing I know is oh, we won’t be updating that any more, you have to buy a subscription to get access to ‘miles’. Yeah, how about no? I’ve already bought a product I expected to work.

    Just to be clear, I understand that products have a life-cycle. I expect to pay to upgrade when necessary. I also expect however to have the opportunity to make a choice on whether to upgrade based on a balance of costs against new functionality. What I don’t expect is the products I’ve purchased to not have the functionality I originally procured unless I then upgrade to their subscription. Yes TomTom, I’m looking at you.

    Some services, of course, I think are an absolute bargain. The core Office365 offering (I use multiple E3 licences for my work) is an utter bargain for what you get. The phone system….not so much. It’s expensive for what it is.

    Aaaanyway, monthly subs. They look cheap, and they’re not.

  • 1Gbps Fibre Internet Connectivity

    I live in Central London, and one of the benefits of living here is that the Internet connectivity has always been relatively good. I’ve had 200Mbps down/20+Mbps up for quite a while, and you get used to such performance. The upload speed always seems to be the restrictor though, doesn’t it? Makes things like uploading YouTube videos, and online backups (for example) something you set off, and leave to it. 

    Anyway, I got the opportunity to upgrade to 1Gbps Fibre connectivity from a company called HyperOptic. That’s 1Gbps up and down. I saw it was available, and given the cost at circa GBP50/month, I thought it was a no-brainer. I also kept my existing provider, as something so cutting edge may perhaps not be reliable. I tend to adopt stuff early, and sometimes that can cause some headaches – I couldn’t afford any issues with my Internet as I rely on it so much. So much so I’ve had more than one provider for as long as I can remember. In fact, I wrote about load-balancing multiple connections here:

    Load Balancing Broadband Connections

    So I’ve had the service for a good few months now, and thought I’d share my views.

    My opinion is a simple one – just wow. It’s been rock-steady from a reliability point of view, and the performance just changes the way you use things. Uploading/downloading stuff feels much the same as if I were operating over a local network. Just simple things like Windows Insider Preview updates come down in no time at all. It just makes my remote working life far, far easier.

    Online backups now are far more tenable than they were. I use BackBlaze, and I’ve found their service very fast, and utterly reliable. You add in a decent connection and all of a sudden I can now backup all of my important stuff and not give too much thought as to how much I backup off-site. I wonder with the advent of such Internet connections whether services like BackBlaze, who offer unlimited backup, may have to reconsider the price point/offer? Mozy went through that pain, and I believe Crashplan has now pulled out of the consumer market? I shall watch that space. I’ve a ton of cloud storage with my Office365 tenancy and my DropBox area so was considering jumping to that, but the BackBlaze service is just such a configure-and-forget platform it’s well worth the money. Backups you have to do sometimes just don’t happen. Automating those backups is the winner here.

    Anyway, performance. Below is the current performance I’m seeing using the DSL Reports Speedtest. I’m connected over 1Gbps wired-Ethernet at this point. 

    Wired

    Seriously, how impressive is that? For comparison, this is the same test but over WiFi.

    WiFi

    In this case, it’s my WiFi not keeping up with the connection – which is fair enough. I use a Netgear R8000 AC3200 router and am in general very happy with the performance.

    Anyways, want to see this stuff working? There are a couple of videos below you may find interesting. I’m incredibly satisfied with the service. Fantastic performance, and a very reasonable price – what more can you ask for?

    Oh – as a quick side note – if you do consider the service remember they use Carrier Grade NAT (CGN). CGN can cause issues with stuff like VPNs and the like. My work VPN won’t work through it for example. They can give you a static IP address too – I spoke to them online about it, and it was sorted in minutes.

  • Unified Communications – Why so hard?

    Quite a while ago I wrote an article on why I like working in the Unified Communications field – you can see it here:

    Why UC?

    It was an interesting conversation at the time, going through the reasons that the technology kept me interested. There is also of course a flip side to this – why is deploying a Unified Communications platform so hard? Or rather, why do so many organisations deploy UC platforms and have trouble with the process?

    It’s an interesting question, and one with many answers. In my working life I typically get involved with two types of organisations and deployments, with these being:

    • Organisations who want to deploy the technology, but are not quite sure how to approach it as it’s not really in their internal skill set.
    • Organisations that give the technology to existing technology teams and ask them to get on with it.

    (Obviously there’s many other scenarios, usually somewhere between the two mentioned above).

    In effect, you’re either there at the start, or engaged later to pick up the pieces. From a technology perspective, you can understand why organisations take both of these approaches. Some are either a little more risk averse, or simply don’t have the internal time bandwidth for such projects – this tends to be the key feeder for the first scenario in my experience. The second scenario has a more varied set of drivers – the more common one is where an organisation does have a great internal team, and that team is keen to get involved in the newer technologies.

    So why is deploying Unified Communications technologies so hard…? Ask that question of 20 people in the field and you’ll likely get at least 27 different answers. For me, the answers seem to differ depending on who is actually answering. Technology people see it as a learning curve – and an enjoyable one, for much the reasons I highlighted in my article Why UC? The problem with this approach is that while the needs of the technical teams are being met, the needs of the users are not. You’re deploying front-line tools, often using people who are learning on the way.

    Deploying UC stuff requires an understanding of the technology at a far deeper level than a lot of other user-facing platforms. Let me put it another way – when deploying stuff like Exchange the platform can be a bit more tolerant of configuration issues than a lot of UC platforms. This tolerance is not really a technical one, it’s more around the impact on the users. Get Exchange not quite right and you’ll have some annoyances and feedback from the users about those issues, but in general the platform will operate.

    Get a UC platform wrong (I.e. Telephony etc.) and my, you’ll be in a world of hurt as those users make their frustrations known to you.

    I think the ‘why so hard’ question is an interesting one, and it’s not one specifically answered by the technology itself. The real reason it’s so hard to deploy well is out there in some of the reasons to deploy the technology in the first place: enabling a user to change how they work.

    That may take some explanation. You want to give your workforce modern and enabling tools to get their job done, get it done well, and to, well, enable them to be more successful. The way you do that is to implement technologies that change the way they work. The problem with this, of course, is that if you give them tools that ‘don’t quite work’ you’re not enabling them, you’re putting them at a disadvantage. The next thing you know you’ve got unhappy users who for whatever reason can’t get their screen sharing, or their conference calls (for example), working.

    Some of the elements of UC platforms that make it great for working on, can also make it difficult to deploy, and to deploy well. Getting the tools out to the users in a way that’s functional, and works well every single time, is absolutely key to a great deployment. A deployment that your user estate will genuinely thank you for deploying. How often does that happen? Going back to the two scenarios I mentioned earlier:

    • Organisations who want to deploy the technology, but are not quite sure how to approach it as it’s not really in their internal skill set.
    • Organisations that give the technology to existing technology teams and ask them to get on with it.

    Using the above scenarios, typically I’ll see that one line of engagement results in a positive experience where the users are effectively brought along on the journey of the new ways of working. The other one often involves climbing a mountain, as the users’ perception of the platform is already tainted.

    UC stuff can be challenging to deploy. Making it work across multiple devices, from anywhere, and in a consistent and repeatable manner requires attention to detail on how the platforms are designed to operate. It requires experience – experience such as knowing which certificate providers can cause you issues with other organisations, or how to provide media quality over congested networks, for example. Getting input from people that do this as their day job can only be a good thing in my opinion.

    Having to work back through existing deployments that ‘don’t quite work as expected’ is probably around a third of my day job. What’s interesting is that it’s always similar problems you see on such sites – problems that could be avoided. What kind of things? Well, misunderstanding how cores work on Lync/Skype is quite a common one. Firewall rules are another. As is not really understanding the roles of QoS and Admission Control. The most common? Probably certificate misconfigurations.

    I’ll finish up by saying that user experience is absolutely at the centre of UC deployments. Lose the users early on and you’ll have an uphill battle on your hands. How do you ensure consistency of the user experience? My best advice would be to have resources at hand who have been there, and understand the underlying technology you’re working on, whether that be Cisco/Microsoft etc.

    Get it right, and your users will love you for it.

  • The Arrogance of Success

    I’ve today had to spend time moving my password management solution from 1Password to another vendor. I won’t say which vendor I’ve moved to, as the idea makes me a bit uncomfortable. Anyway, the reason I’ve moved got me thinking about other vendors I’ve used – and loved – in the past, but then ended up really, really disliking. The process is usually a similar one, and it goes this way:

    • Vendor produces a GREAT product. 
    • I invest in it.
    • I tell people to invest in it, as it’s great. 
    • Vendor basks in its success.
    • Vendor has idea to get more money out of that love.
    • That vendor implements things that annoy.
    • Vendor refuses to listen to its long term users.
    • Users get rage, move on.

    The critical part of this seems to be that vendors almost get arrogant in their success – once they have that arrogance, things start to go wrong.

    So what has 1Password done? Why has it annoyed me so much that I’ve binned my investment in them, and re-invested in one of their competitors? Well – at a high level – they’ve got very arrogant in their position. Specifically, they’re pushing their user population to a subscription service, and yet seem to be completely disregarding their existing customers. Customers who have invested in their success. Firstly, one of the things I liked about 1Password was its ability to use local vaults that did not require uploading my stuff to someone else’s cloud in a way that the vendor could read my data. Other things I liked were its multi-platform support, and the fact that it in general worked well.

    What’s so wrong with the subscription service? Well…not that much I guess. It moves the financial model from a platform-related one – i.e. a paid version, then you pay to upgrade – to a regular monthly cost. Of course they say it’s ‘the cost of a couple of cups of coffee a month!’. Sure, that’s true, but it adds up to a lot more over time than the platform/version type investments. In addition, 1Password have made no effort to smooth the journey for people who have already bought the client on (in my case) Mac OS, Windows, iOS and Android. I’ve effectively just burned that investment – so hell, I’ll burn it and invest again elsewhere.

    WAIT, you think – this doesn’t of course mean the existing clients will stop working, does it? No, it doesn’t. In addition, 1Password has stated they’ve ‘no plans’ to remove the support for local vaults. They shout about that, and the words they use tell me one thing – that they’re ignoring the key question most people like me are asking: will the existing non-subscription client continue to be updated? The silence on that question, from multiple people, speaks volumes. I’ve spent way, way too much time getting them to answer this question today. They’ve answered everything except this specific question. Make your own conclusions from that. A lot of users – me included – feel betrayed by them as a vendor.

    So, it’s goodbye 1Password. I’ll take my investment elsewhere, and I’ll also advise everyone I talk to of exactly the same thing. This bit is interesting – as somebody who works in tech I get asked a lot about what to buy/use etc. I wonder what the effect of that is?

    Just for clarity, I don’t expect to buy one product and expect upgrades for free forever. I’ve no issue re-investing in new versions etc. This method of pushing to subscription though just bugs me. I have to pay again, for something I already have, and that uses a sync methodology I don’t want to use, and the platform I’ve already purchased will no longer get any updates. 

    So who else have I dealt with that have gone this route? Well, let’s think:

    • MOZY – the backup service. Initially a great service, at a great price. Gets you invested, and then they whack the price up. Mine went up nearly 1200% for example. So bye then.
    • Crashplan – same thing, attractive buy in, then the service got massively over-subscribed, and plummeted.
    • TomTom – pushing everyone to their subscription model, negating my fairly substantial investment in the existing apps from multiple countries.

    It just bothers me that companies get this success behind them, and then they go and crap all over the people that got them the success in the first place!

    Professionally I come across this too. How many times have I worked with customers where existing providers, vendors, and partners have just taken that customer for granted? Margins go up, response times go up, and the vendor/partner seems to assume they’ve already won the business. Then along comes a hungrier partner, keen to get some good stuff going, and boom, that original partner is now sidelined and struggling to get back in the door.

    It’s a difficult thing to try and keep that hunger I think, in some respects anyway.

    The push to subscription services is a golden opportunity for a lot of companies. A golden opportunity to turn business into a nice annuity – few vendors have done it well in my experience. Who has done it well? Well, Office365 I think is an utter bargain for what you get. Same with the Adobe stuff. TomTom? Nah. It’s Waze or Google Maps for me now.

    Such is life I guess. Become arrogant in your success, and your success will become a lot more difficult to maintain. Annoy one of your customers – whether consumer or business – and it’s not just that customer you’ll lose, it’s everyone else that customer gets to influence too.

  • Protect your stuff!

    Haven’t blogged for a while. I’ve been busy with the day job, doing some properly interesting stuff. Without boring you all to tears, I’ve moved back from being constantly in a sales/pre-sales environment and gone back to actually doing stuff. It’s what I enjoy, it’s what I’m good at – I think – and it produces defined, actual outcomes. Mac is in a happy place.

    Anyways, that’s not the point of this blog. I’m sure by now you’re all sick to death reading about the recent ransomware attack. Now, in the press it was all about the NHS – the UK’s National Health Service for my overseas readers. FREE DOCTORS for my American friends. The actual scope of the attack was far wider of course – lots and lots of people got hit by it.

    I’m not going to delve into that attack too much – like I say, you’re probably sick of hearing about it – but I did have an interesting conversation about how to protect your stuff against such things. It set me thinking about how I protect my data. 

    I’ll be honest and say I’m quite paranoid about my data. Why? Well, I’ve experienced losing some important things – think photos and some videos. Stuff you cannot reproduce. It’s utterly gutting. Some stuff would just be a pain in the backside to lose – but you can reproduce it. Documents and the like. Others – irreplaceable. 

    This paranoia has led me to have a really robust backup system – I think. So I thought I’d share my thoughts on how you make your stuff resilient to such attacks.

    There’s more to protecting your data than just having a copy of it – you need to protect against corruption too, regardless of whether that corruption is accidental or malicious. The malicious bit may take some explaining – let’s say for example you have a week’s worth of backups of your stuff. Now, you get infected by some pesky ransomware that sits quietly in the background encrypting your data….and in week three pops up the dreaded ‘Give us ONE MEEEELION DOLLAARS’ for your data. You’re utterly stuffed. It’s outside your backup window – all the stuff in your backups will already be infected with that crappy malware.

    Now I’m not going to preach to you about how to protect your stuff, but I thought some of you may find it interesting to see how *I* protect my data.

    For perspective, my typical active data is about 50GB of work stuff, and about 200GB of personal video/photos etc. I generate, on average, about 1GB of work data a month (email and documents), and around 5GB of personal stuff. I will point out that I archive and keep everything however, so your data production will likely be lower. Personally, storage is cheaper than my time to go through deleting emails I will never need. I just keep everything.

    If you’re not very techy, or don’t have the inclination, I’ve ordered the stuff below by importance and how easy it is to do.

    So, how do I do stuff? See below. Just to be clear – before I get a kicking in the comments – there are other things you need to do: Anti-Virus, keeping updates…updated etc. I’m specifically talking about how I handle backups.

    Automate your backups

    Firstly, and a really, really important point, is make your backups automatic. Why? Well, stuff that takes effort does not get done as often as it should. Also, it’s an effort. You have to do stuff. Both Windows and Mac OSX can fully automate backups for you:

    Apple Mac OS TimeMachine

    Windows 10 Backup

    I will honestly say that Apple’s TimeMachine absolutely knocks the socks off Microsoft in this area. You set up TimeMachine, and it backs up every hour for you. That’s it. You never need to do anything else. Windows – sure, you can do it, but it seems a lot more involved.

    Anyway, make the whole thing automatic and you’ll *always* have backups of stuff. If I had one single recommendation, this would be it.

    Backup Media

    I have two backup media sets of 20TB (Yer, I know – you probably won’t need anything like that) that I swap out once a month. What do I mean? Well, imagine in my setup that TimeMachine backs up my main machine every hour, on the hour, to that backup set. Let’s call it SetA. At the end of the month, I physically disconnect that backup set and stick it in a drawer – don’t panic…we’ll get to offsite in a minute – and then I connect another drive called ‘SetB’.

    Why? Well, it does numerous things: It protects against a failure of my backup drive(s), lengthens my backup window, and also provides a longer backup set and will protect against such ransomware encryption attacks. Perhaps not totally – more on that in a second.

    So how could you use this? Well, 2TB drives are cheap. Let’s imagine you have a reasonable amount of data that a 2TB drive could accommodate – buy two, and on the 1st of the month swap them over. Stick the other one in a drawer. If you want to be really fancy, stick it in a drawer at your office.

    Offsite Backups

    Due to where I live, I’m blessed with a very good internet connection. I use this to back up all of my stuff to an online service. Now, I use BackBlaze. It’s on my main machine, and it just sits there uploading my stuff to the BackBlaze service. OMG THEY’VE GOT ALL YOUR DATA! Calm your boots. I encrypt everything. Not the subject of this blog, but if anyone’s interested I’m happy to write about how I protect my own data when it hits the cloud? Let me know in the comments and I’ll sort something.

    I’ve the best part of a couple of TB up in BackBlaze now and it works really well. It also keeps an archive of up to 30 days for each file, so you have an archival history of each file backed up too. It’s a good service. NOTE: With any backup service, make sure you test restoring!

    Point-In-Time-Backups

    The other thing I do is take snapshot, or point-in-time, backups. What do I mean by this? Well, in addition to the automated stuff above – the regular TimeMachine backups, and the backup to BackBlaze – I also take ZIP (well, RAR, but people know what ZIP is) backups of my changed data, usually weekly (there’s a quick sketch of what I mean after the list below). I put these into a folder that:

    • Gets backed up to BackBlaze
    • Gets backed up to my normal hard disk regular backups
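    The weekly archive itself needn’t be anything clever – here’s a minimal sketch of the idea using Python’s standard library. The folder names are made up for illustration:

    import datetime
    import shutil

    SOURCE = "/Users/me/Documents/Active"       # hypothetical working area
    ARCHIVE_FOLDER = "/Users/me/PointInTime"    # the folder that then gets backed up

    stamp = datetime.date.today().isoformat()
    # Produces e.g. /Users/me/PointInTime/snapshot-2017-05-01.zip
    shutil.make_archive(f"{ARCHIVE_FOLDER}/snapshot-{stamp}", "zip", SOURCE)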

    Why do I do this? Well, simply to give me a point-in-time roll-back, i.e. I can go back and find all of my photos/documents etc. at a particular date. WAIT. Isn’t this covered above in the offsite/automated stuff?? Well, yes, it is, but it enables one more thing……

    Non-Synchronised Offsite Backups

    This bit is key to protecting against ransomware. What I do is take those point-in-time backups above, and I put them somewhere that isn’t synchronised to any of my machines. Think about this. I have a backup archive dated, say, 1st May 2017. I put it in a folder in DropBox that is *only* in DropBox. It’s not synchronised to any of my machines. How could any ransomware possibly encrypt that and block my access? It can’t, is the answer.

    It’s an incredibly simple thing to do. On DropBox for example you can do selective synchronisation. I create a folder on DropBox, and ensure it isn’t synchronised to any of my kit – all using that selective synchronisation. If you’ve already uploaded the stuff you can use the DropBox web site to copy the stuff to the folder too – you don’t need to upload it twice. This is important as if you’ve got a ton of data up there you don’t want to be uploading it again.

    So what does this give me? Well, it gives me a copy of all my stuff that my end-points (I.e. PCs, Macs etc.) can’t access to encrypt. It’s a simple solution to a complex issue.

    Summary

    Protecting your stuff shouldn’t be that hard, and it shouldn’t take very much technical know-how really. It would utterly break my heart to lose some of the photos, videos, content that I have – stuff that isn’t reproducible. So with some effort, I do my best to avoid that happening. As a side-result of that I protect other reproducible stuff in the same way……I don’t like having to re-do stuff.

    Anyways, it’s an interesting subject. As data-sets get bigger this is going to become more challenging, not less. I’m sure technology will keep up however.

  • Consumerism & IT (Working title – Everyone’s an expert)

    The consumerisation of technology has led to some interesting effects – namely people who do the odd bit of IT at home implementing IT systems for offices…and coming a bit unstuck for various reasons.

    In some respects a lot of us will have been there. I remember early on in my career some of the initial deployments of Netware 2/3 … Well, let’s say visiting those sites a few years later made me cringe somewhat. Still, you learn right? Educate and move on.

    Education & experience does seriously affect your world view though doesn’t it? For example I look now at code I wrote maybe 5 years ago – and while it’s functional – I often find myself thinking what on earth was I thinking? Or why did I write those 60 lines of code when a simple piece of sub-code would work? Also – I appear to have discovered in-code documentation. This is a good thing.

    Anyways, a little while ago a friend of mine who’s an owner of a small startup (well, I say small – it’s about 150 people now, and I say startup – they’re on year 4 I think) asked me for some guidance. Why me? Well, their issues were predominantly around using Skype in Office365, WebEx, GoToMeeting etc. They’d tried them all. All with similar issues – disconnections, poor quality etc.

    I swung by one day fully prepared for a free lunch, and then spent a good few hours scratching my head. I could see things were not working – what I couldn’t see was what wasn’t working. Yes, confusing statement. What I mean is that media products were terrible – laggy, disconnections, and just generally hideous. I got chatting to a few people around me and their general assumption was that it was ‘everything’. That ‘everything’ changed my view, and I started looking into general network performance.

    Internet speed tests – brilliant. Local network tests – what you’d expect from 1Gbps local connections. Then wait – what’s that? Did I just see Outlook disconnect for a few minutes too? What on earth. So I started to look harder at the network – random loooong times to connect. Wait what?! Anyways, I did some more digging and I see that there’s been some hacking on the workstations to massively increase the TCP/IP timeouts and retries. Aha, now we’re on to something.

    As part of my wandering about and getting coffee, I glanced at the IT rack. Yes, the IT rack is in the coffee area. Where else would you put it? Anyway, this consisted of a pile of Netgear switches on a shelf…in a usual uncomfortable mix of different colour cables, and lack of securing to the rack. In my idleness I started to work through the OCD damaging mess of cables – and the issue became clear. Something so, so basic that I’d almost forgotten to check for such things. Something was flashing in my brain about the 5-4-3 rule. 

    So…I start pulling the mess of cables apart – and this connection method becomes clear:

    Switches…done badly

    So day 1 they buy a cheap 8 port switch. Who doesn’t have a drawer full of them? Then they need a few more ports – so add another switch, plug in the Internet connection and the odd on-prem service. Then wait, we need more ports. Plug in new switch etc. Then we end up in the world of network notwork. Did you see what I did there? Kinda proud of it considering my level of coffee intake.

    Anyways, anyone with the basics of networking should see how inefficient the above is. A simple re-wire and guess what – everything now just works. Forgive the poor diagram – I’m still having breakfast:

    Switches done goodly

    It’s so easy to wander down to PC World (other advice-filled, wondrous providers are available), buy stuff, and just plug it in…and it mostly works, doesn’t it? Hell, this model probably would cope if all it were doing was delivering Netflix to the front room, kitchen and bedrooms. For work though, all those minor irritations, disconnections, and poor quality media sessions rob your users of time and motivation, which of course robs your business of productivity.

    Anyways, here endeths my Tuesday breakfast sermon. Be careful of the fella that ‘does IT’ because he got Netflix working in his kitchen. Or something.

  • Goodbye Evernote…

    UPDATE: Over the last week I’ve received numerous emails on my customer’s email platforms (like many consultants, I end up with a lot of accounts) re-affirming that corporate data must not be kept on Evernote. How much damage have Evernote done to themselves here? It’s looking like a colossal backfire.

    Original Article

    I stumbled across the Evernote platform a number of years ago. Its multi-platform, sync-to-any-device, electronic-scrapbook method of operation became very useful, very quickly. Here I am now with the best part of 10GB of information in there. Not any more, however.

    Some articles flying around over the last couple of weeks set the alarm bells going off about their privacy policy. Take this one for example over at LifeHacker:

    Evernote Employees Can Read Your Notes, and There’s No Way to Opt-Out

    Even the headline made me sit up and take notice. The fact that they can read your notes, with no way to opt out, made me immediately wonder how. Surely my data is encrypted at rest within their cloud? So I could only assume they have access to the encryption keys. No, wait, how wrong I was. More on that in a second.

    Firstly, the update to the privacy policy due end of January 2017 stated that a machine learning tool may require Evernote employees to look at your data. Hmmm. Ok. Perhaps. But wait, even if you opt out of that, you absolutely cannot opt out of allowing Evernote employees to look at your data. Think about that for a minute.

    Now of course after a fair bit of pressure they’ve backed down on the changes, and sound contrite about it. See here:

    Evernote Revisits Privacy Policy Change in Response to Feedback

    By ‘feedback’ I assume they mean ‘rage’.

    Note that they say that Evernote employees won’t access your data without your permission. Now, after that thing over having to opt-out of stuff explicitly to stop them reading your data – well, it’s got my spider senses tingling for a couple of reasons.

    Firstly, what else have I opted in/out of that would allow them access whenever they like?

    Secondly – and far more important to me – is how are they accessing my data? There are only two options here really – the data is stored at rest with no encryption, or Evernote employees have access to your encryption keys.

    After some investigation it would seem it’s the former – your data is not encrypted. A lot of the talk about encryption in the service really only applies at the transport layer – i.e. data to and from the US data centres. It is not encrypted at the data centre itself. (Update: it appears Google Cloud is encrypted at rest…but that’s not really relevant if Evernote staff can get a clear view of your data).

    That sets off all kinds of uncomfortable feelings.

    Looking into this even more, I can see that in the forums encryption at rest has been a long-requested feature. Why wouldn’t they implement it? Access – that’s why. I suppose there’s a marginal technical reason about workloads (encrypted data is slightly more expensive from a transactional point of view to process)….but hey, now they’ve moved to Google’s cloud there’s no issue there, is there?

    Yes, Google Cloud.

    So here we are now with your data, unencrypted at storage point, and within Google’s platform. The forum thread on the subject is interesting. 

    I am so not down with this I couldn’t get my stuff off their platform quick enough. In particular this one almost made me choke on my coffee:

    Google Quote Lols

    I’ve always been of the mindset that any company that has ‘Don’t be evil’ as a corporate motto has a reason for that motto being in place. This is not a good thing. I will add though that as far as I know Google’s Cloud does encrypt data at rest, so perhaps this is progress of a kind?

    Everything about this move – the issue around access to data, the lack of encryption, the move to Google – has my tech senses tingling like Spiderman at a Marvel party. Consider this from the forums:

    Evernote Quote
    • We don’t provide you with a feature that lets you client-side encrypt all your content in a way that we can no longer read it. 
    • The only end-to-end encryption feature we offer is note text encryption. We’ve had a lot of people voice their interest in full note, notebook, and account encryption, but we don’t have any plans to support that right now.
    • Both Evernote and Google will have access to data that you don’t manually encrypt using our note text encryption feature.

    Well ain’t that dandy.

    So, for me, sadly, it’s goodbye Evernote. The platform front end is great – really great – the back end, and their attitude to their users’ data, not so much.

    What will I replace it with? Well, not one product, that’s for sure. I’m sure OneNote will be in the mix, as will making more use of my Office365 50GB mailbox, as will putting stuff in flat-file storage again. All of them preferable to what Evernote is doing with my data today.

    Sad times.

  • NO! We do not sell your data! (Working title – Email Aliases)

    If you read my blog you’ll know I have a certain healthy paranoia about security. I encrypt everything, and am at a loss why people don’t use Password Managers more often. From a finance point of view I go as far as having a separate cash account with low amounts of money in, and a separate credit card with a low limit. They’re the only ones I let near the Internet. Perhaps too paranoid.

    Anyway, I’ve a current little spat going on with a certain electricity provider who insists they don’t send on my details from when I’ve registered with their website. I find this hard to believe, as since I’ve registered with them I’ve been getting a ton of spam – not crappy ‘you’ve won a million dollars’ ones, but proper targeted advertising. How do I know it’s them? Well, it’s the email addresses I use for services.

    The email address for this account is similar to:

    RoadNOenergysupplier@mydomain.com

    …where ‘road’ is a three-character abbreviation of the property address, NO is the house number, and supplier is the supplier – mydomain.com of course being my personal email domain.

    NOTE: @Simon_kiteless mentioned over on that there Twitter that you can do this very simply with GMail, using the ‘+’. So you could enter ‘yourEmail+supplier@gmail.com’ and filter your email on that address. Very cool, thanks Simon. You can read more about this method on Gizmodo. 
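    Just to make the pattern concrete, here’s a minimal sketch of both styles – my own-domain format and the Gmail ‘+’ trick Simon mentioned. The names and domain are made up for illustration:

    def own_domain_alias(road, number, supplier, domain="mydomain.com"):
        # e.g. ("Acacia Road", 29, "energysupplier") -> "aca29energysupplier@mydomain.com"
        return f"{road[:3].lower()}{number}{supplier.lower()}@{domain}"

    def gmail_alias(mailbox, supplier):
        # e.g. ("yourEmail", "energysupplier") -> "yourEmail+energysupplier@gmail.com"
        return f"{mailbox}+{supplier.lower()}@gmail.com"

    print(own_domain_alias("Acacia Road", 29, "energysupplier"))
    print(gmail_alias("yourEmail", "energysupplier"))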

    I do this for pretty much most key websites. This is for a few reasons:

    • It’s more secure. Somebody getting one email/password combo won’t automatically get access to every other site where you’ve used that combination.
    • It means I can track who’s doing what with my information.

    The primary purpose was the first one – the second one was a curiosity…and it’s the first time I’ve had a major provider swearing blind they don’t sell on my data, and yet…….

    For what it’s worth, does anyone else have a hotmail/outlook.com address they give to people they don’t want to really talk to? No? No, me neither.

    Anyway, this is pretty easy to do with most email providers. I use Office365 for example, which allows up to 200 email aliases. As I use a password manager I don’t have to remember them all. I also of course have a generic one that I use on sites I don’t really care about – and email for that address goes to a dump account that I check now and again.

    Anyways, I’m curious to see how this one plays out.

  • Why don’t more people use password managers?

    Woke up to see another data leak story today:

    O2 Customer Data Sold on the Dark Net

    Whenever I see stuff like this it makes me double check my security for websites etc. to make sure I’m not accidentally doing anything daft. This morning though it got me thinking – how come more people don’t use password managers? Or more specifically – how do people keep stuff secure without using password managers? It’s beyond me.

    Looking at my own for example I currently have 422 logins to various things. Yes. Four Hundred and Twenty Two. Bonkers.

    Without a password manager the likelihood is a lot of those logins would:

    • Use relatively simplistic, memorable patterns or words.
    • Be repeated across multiple websites.
    • Be instantly forgotten, with me constantly hitting ‘forgotten password’ on sites.
    • Recover to a single email address, meaning one compromised mailbox could result in the compromise of multiple sites.

    Now, I’m the first to admit I can be a bit paranoid about such stuff. I like to follow best practice. Achieving that though without a way of managing your passwords – difficult. 

    Working in IT means I get to ‘help’ a lot of my friends etc. with their computers, microwaves, shelves* etc. It astonishes me how poor their general attitude to security is. Firstly, it’s rare someone will hand me their laptop and it be encrypted. They never think to question how I recover their stuff so quickly, just assuming it’s something down to the black art of ‘those computer things’. I’ll also often find things like text files on the desktop containing common email/password combinations, more often than not including the web site that they’re associated with.

    Utterly crazy.

    So why is a password manager so important? Well, the obvious one is that it stores all your passwords and makes them easy to access. It has other – more important – benefits too:

    • You can generate truly random passwords that even you won’t be able to remember. Stuff like SAjhhWJKH987KJJ71$$$!$%%%%_43 for example. Try remembering that. (There’s a legitimate argument against such passwords too, to be fair: see XKCD.) There’s a quick sketch of generating one after this list.
    • You don’t have to remember all those ridiculous passwords! The manager will do it for you.
    • You can have unique passwords for every single site.
    • Most will run a security check across your stored passwords, advising you of any weak or repeated choices.
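    For the curious, here’s a minimal sketch of generating both styles – a truly random password and an XKCD-style passphrase – using Python’s secrets module. The word list is tiny and made up; a real passphrase generator would use a far longer one:

    import secrets
    import string

    def random_password(length=24):
        alphabet = string.ascii_letters + string.digits + "!$%_#@"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def xkcd_style(words=4):
        word_list = ["correct", "horse", "battery", "staple", "boat",
                     "sunday", "alpaca", "motorway", "kettle", "marble"]
        return " ".join(secrets.choice(word_list) for _ in range(words))

    print(random_password())   # let the password manager remember this one
    print(xkcd_style())        # e.g. "alpaca kettle boat staple"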

    Wait – how do you secure your password manager? Well – of course you have to set a master password…and you need to be creative with that. I use phrases that make no sense, for example, rather than short words. Something like ‘Jay likes to eat b0ats on a Sunday’** for example. Easy to remember as it’s so weird. You don’t find yourself typing it in very often either – on iPad/iPhone it’s all TouchID, and on my main machines it locks when I lock the screen.

    Do I save passwords in my Browser? Yes, I do – for some sites. Never for any sites that hold any detail or financial information.

    What about the ‘tick here if you’ve forgotten your password’ option – if those emails all go to the same address then hey, you only need that one email account compromised, don’t you…? Well, of course I don’t set up a different email address for every website, as that would be beyond silly – but I do have a few separate ones for secure sites. I don’t use the same email on any financial sites for example. Ever. That could however be part of my general paranoia and may be a bit beyond the norm – I’ll graciously accept that.

    Honestly, check out password managers. It’ll make your life more secure, and what’s more make your day to day easier too.

    Products like:

    LastPass

    1Password

    …are probably the most popular. All multi-device, all integrate natively into your web browser etc.

    Get secure. It’s your responsibility, not the provider’s. It’s your data. They’ve always got a get-out – ‘oh, we did our best’. Blaming them all you like may make you feel better, but it’s still your secure data flying around the internet.

    *OOOO! You work in IT! Can you help me put a shelf up?

    ** No, this isn’t my passphrase. Even I’m not that daft. //scuttles off to change pass phrases.

  • Maximum Network MOS Scores Per Codec

    Placeholder: Not interesting!

  • Encryption…It’s for everyone

    Working in IT invariably means friends may ask you for help at some point. It’s the nature of it. Sometimes it’s help with a laptop, but it can also be random stuff like “Hey, you work in Computers! Can you help me with my new Microwave”. We’ve all been there.

    Anyways, recently I’ve been following the developments on Apple vs The US Government on iPhone encryption – and it got me thinking. How many people who aren’t in IT actually encrypt their stuff on purpose? I bet it’s not many. I’d be surprised if a lot of non-computer literate people actually did it, or understood why it’s important.

    Let’s take a recent event: somebody’s machine crashing and them not being able to start Windows. Let’s forget for a second their extreme panic at the fact they had about five years’ worth of stuff on there…and no backup. Yeah, let’s forget that.

    So, I pull the hard disk out of the laptop, plug it in to mine…and there you go. I copy off all their data. Easily. Everything. Photos, documents, copies of their passport, bills, driving license…I’m sure you can see where I’m going with this.

    It’s so easy to do, it’s beyond terrifying. Imagine if they’d had that laptop stolen, or left it on a train.

    What about home stuff? Sure, your home PC (people still buy those, right?) is safe, isn’t it? After all, it’s got a password on it. Of course it isn’t. If the worst happened and you were burgled, and the base unit/external drives were nicked…getting your data would be beyond easy. If it’s an external drive, just plug it into something else.

    Personally, I encrypt absolutely everything. Every laptop, every external drive – anything that can be encrypted is encrypted. It’s the only thing that makes sense. Some may say I’m being over-cautious. Why am I? I have a sensible backup setup with multiple copies of stuff, on and off site, all encrypted of course. My risk of losing data is quite low.

    The risk of losing unencrypted data though – that is sky high. I don’t mean the risk of losing drives, I mean the risk to me should I lose one of those drives with data on. Could be customer data, personal data, could be anything. 

    Whatever it is, I don’t want people accessing it. The cost of the hardware/drive etc. is an annoyance. Losing the unencrypted data would be a catastrophe.

    It’s not hard to encrypt stuff now – so really, you should make it happen.

    Use FileVault to encrypt the startup disk on your Mac

    Turn on Device Encryption

    It is beyond easy.
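
    On the Windows side, a couple of lines of PowerShell show just how little effort it takes. This is only a sketch – it assumes a Pro/Enterprise edition with the BitLocker cmdlets available, a TPM in the machine, and an elevated prompt:

        # Check what's protected right now.
        Get-BitLockerVolume | Select-Object MountPoint, VolumeStatus, ProtectionStatus

        # Encrypt C: using the TPM, then add a recovery password.
        # Store that recovery password somewhere safe (and not on the laptop!).
        Enable-BitLocker -MountPoint "C:" -TpmProtector
        Add-BitLockerKeyProtector -MountPoint "C:" -RecoveryPasswordProtector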

    Now, what about the scenario above where somebody has a laptop, it’s encrypted, they’ve never taken a backup…and it fails? Well, you’re pretty much stuffed. There are things you can do if it’s just OS corruption or a hardware failure elsewhere – take the drive out and use your recovery key to decrypt it – but it is harder.

    Me, I’d personally encrypt everything and have a decent backup. A failed drive to me is an annoyance; I can’t remember the last time one actually threatened any data.

    Anyways, encrypt your stuff. It’s important. Unless of course you only keep pictures of cats.

  • Why UC?

    It’s not often I write opinion pieces – I’m usually all about the technology and the how, and less about the why…so bear with me. I recently found myself having a coffee with quite a senior IT Director of a large organisation, and also a couple of very capable techies. The techies were focused on large infrastructure, so think large Exchange, AD, SQL and associated systems. I’ve known these people for a while – I came from a similar background.

    The conversation got round to why I am so into UC-type technologies, with a particular lean to the Microsoft offerings. Such an open question, and one you’d expect a short and easy answer to – but the more I thought about it, the more I realised it’s not an easy one to answer.

    My evolution in the world of technology consulting started with Novell Netware (version 2.11 if I remember rightly). I loved working on that stuff – an interesting, evolving market. I worked right through the Netware 3.x and 4.x days, and then around the 4.x era Microsoft got in on the act. I remember sitting in a presentation over at Thames Valley Park and thinking that Windows was going to ruin Novell’s space – mainly through the commercial reality of it, even though the tech in me at the time didn’t think the platforms comparable. At the time, Novell was sold in user chunks – 5, 25, 50, 250 seat licenses for example. Here was the upstart Microsoft offering a server that would do the file sharing (and much more) for just the server license cost – there were no CALs back then. Buy a single server license, connect as many clients as your server could handle. This seemed incredibly aggressive to me, and back then I wondered how it would work financially. Arguably it didn’t work – Microsoft later brought in the CAL model, meaning each user would need a client access license. Ho hum.

    …Anyway, off into the world of Microsoft I go. Working on largish Active Directory, Exchange and, for a while, Citrix deployments. Good stuff, enjoyable, and it kept me in holidays and biscuits. A fairly rapidly developing area anyway – keeping up with the new technology was interesting, and integration with other platforms also became more viable as the product set developed. Interesting stuff.

    The evolution of working on Microsoft Exchange and then moving into more realtime stuff like Office Communications Server (OCS), Lync, or Skype for Business seems a common one. Most of the people I work with in this space seem to have followed a similar route. This is the source of the originating question – why the passion to stay in realtime/unified comms?

    I thought there was a simple answer to this, and there really isn’t. So I’ve thought about this some more, and tried to break it down objectively.

    Technology

    From a technology perspective, working in UC means you have to have a good in-depth knowledge of multiple complex products across the stack, as well as a solid understanding of how vendors work together. Consider a Lync/Skype deployment for example:

    • Active Directory. AD is at the heart of the identity in S4B – getting user identity right is paramount to ensuring a high quality user experience. A lot of scripting/updating/architectural stuff here to be done.
    • SQL. SQL is used for various functions – it really helps to know how it works. In some of the larger estates I’ve delivered (think 200k+ seats), script modifications in databases etc. weren’t unheard of.
    • Storage. Large transactional processing often requires storage that performs in the way you need. It’s not just a bunch of storage.
    • Virtualisation. VMware/Hyper-V – how often do we install physical servers now? Even on installs where you are deploying physical units, a chunk of the estate will often still be virtualised.
    • Networking. Fairly key to the delivery of media and a quality user experience.
    • Load Balancing/Firewalls/Reverse Proxy. Possibly should be under the networking header, but you get my point. Knowing how these work is fundamental to a great UC setup.
    • Scripting. Anyone who says they’ve deployed/managed a large system of thousands of users, and who has no skills in PowerShell/VBScript-type stuff…well, you’re either a glutton for punishment or I’m questioning your credentials. (There’s a quick example of what I mean just after this list.)
    • End Points. PCs, laptops, phones, mobiles, tablets…you name it, a UC platform needs an answer for deploying to it. This also includes technologies such as mobile device management.
    • Other Voice Vendors. You need a fairly good understanding of most of the other common vendors in this space too – Cisco, Avaya, Mitel etc.
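
    To illustrate the scripting point above – a quick sketch, assuming you’re sat in the Lync/Skype for Business Management Shell, of the sort of one-liner reporting you lean on constantly in a large estate:

        # Count Enterprise Voice-enabled users per registrar pool.
        # Assumes the Lync/Skype for Business Management Shell is loaded.
        Get-CsUser -Filter {EnterpriseVoiceEnabled -eq $true} |
            Group-Object RegistrarPool |
            Select-Object Count, Name |
            Sort-Object Count -Descending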

    The above is just a selection of technologies I now get to work with every day. Would I get to work with that wide a stack if I’d stayed in a more traditional AD/Exchange-type role? Possibly some of it, but certainly not all…which also leads me on to the additional point of…

    What does UC do for the Users?

    Ever tried to get users excited about a new version of Exchange or Outlook? Tough call that one. Each new version of Exchange brings a lot of architectural advantages of course – but they tend to help IT Services deliver a better service, or arguably deliver more with less kit. Apart from perhaps larger mailboxes, what value does it really bring the user? Of course Outlook has developed on – it’s a great product, and highly functional. I’ll take you back to my original point though: ever tried to get users excited about a new Exchange/Outlook deployment? Yeah, doesn’t happen in my experience.

    The same goes for other back-room IT type stuff too – if it doesn’t change the way a user works, then….from their perspective, what’s the point?

    Conversely, with UC, you’re offering truly new capabilities to users, and in my experience, people love the opportunities it gives them. The capabilities make people’s working days easier. Connect people, connect them well, from anywhere…people love it. Even doing things like product demos or model office type stuff, you can see people joining the dots on what capabilities these types of technologies bring to their working life. Depending on what direction you’re coming from, it either makes your existing work easier, or it can enable you to achieve more in the time you have at work. Either way, it tends to be a positive experience for the adopters.

    I’m not really talking about vendor aligned UC at this point either – Microsoft/Cisco etc. all have great solutions of course. From a user experience point of view, I’ve yet to see one that comes close to Microsoft’s however.

    Of course, from a delivery point of view, to get this positivity out in the user population the user experience is absolutely key. If a user has to start thinking about how to do stuff based on their device, location etc., well, it doesn’t work quite so well. For example, there’s a site I know where a couple of remote offices use an Internet VPN for connectivity. This is absolutely fine, and mostly transparent to the user…apart from the routing not having been set up quite right for Lync 2013. Users on that site can get basic services, but they run into issues with web conferencing, sharing etc. I won’t bore you with why this is, but essentially those users have become accustomed to disconnecting from the corp network and connecting via a public WiFi access point in order to get the flaky services working. Great user experience that isn’t.

    In the above scenario, the main sites that utilise the tech love it – it’s been listed repeatedly as the best thing IT have done for the employees in a while. Look at that couple of offices though, and they think the product set is terrible and unreliable…and this doesn’t sit well with me.

    Why work in the channel?

    The final thing we were discussing in relation to this, was that we’d all effectively spent our careers working in the IT Technology Delivery sector. That is working for the companies that repeatedly deliver this stuff, rather than working directly for end-clients who may operate such platforms, but perhaps only implement/upgrade etc. every few years.

    For me, the reasons for this relate back to my points about the technology. Working for a reseller means a stream of projects around the technologies that I’m interested in. It means you get far more exposure to those core technologies, often in greater detail, and often with stronger backing from the vendors themselves too.

    Finish one 1k-seat deployment? It’s off to your next one, with a new set of requirements, new unique business challenges, and yes, a new set of users to impress. It’s a different approach from the design/implement/operate cycle I tend to observe when working directly for the consumers of the technology.

    What’s Next?

    I suppose the next question is…what technology next? I’ve been fortunate to have spotted (probably by pure luck) evolving technology spaces – whether it was the jump from NDS to AD, or the excitement at remote desktop type services such as those delivered by Citrix. I saw the same opportunity with UC. Is UC mature now? Well, obviously it’s a lot more mature than it was. Working between platforms now is far easier than it once was, and people’s implementations of SIP seem to vaguely follow the same ideals now. So it’s certainly maturing; I’m not convinced it’s totally mature yet.

    What’s next? Well, if I told you that, you’d all get in to it wouldn’t you? Watch this space.

  • Jabber vs Lync/Skype for Business

    Jabber & Skype – which is better? What a difficult question to answer. I personally much prefer the Lync/Skype experience to that of Cisco Jabber – but why? How do you quantify that? It’s a question that gets asked a lot by businesses now, and certainly ones that are heavily invested in Cisco.

    It’s an interesting fight isn’t it? I saw a comment from a Cisco commentator a while ago asking:

    ‘Do you want an Enterprise Communications Platform that also does Instant Messaging, or do you want an Enterprise Instant Messaging platform that also does some telephony?’.

    I think that’s a little disingenuous really, and doesn’t tell the whole story.

    There’s an (obviously one sided!) document from Microsoft here, that’s worth reviewing:

    Comparing Skype for Business versus Slack, Cisco, and Google Hangouts

    Technical Comparison

    The problem you can often run into when running a technical assessment of products is a lack of differentiators. How do you choose between, say, a Mitel PBX and a Cisco one on a list of functional capabilities? The gap isn’t big, if it exists at all. Yet people constantly do choose, much preferring the Cisco UCM route in my experience.

    It’s more than a list of functions.

    It’s the same with Skype vs Jabber. Look at them from a functional basis – IM, presence, voice and video calling, conferencing, sharing – and what gaps do you see? Not many really.

    Both products do all of this in some form, don’t they? …and yet.

    Do I need to choose?

    This is another interesting question. Do you have to choose between Cisco and Microsoft? Well, no, is the answer. A Cisco voice platform with Lync/Skype on top is probably one of the most common deployment topologies we see. Some view it as the best of both worlds – leveraging Cisco’s excellent Call Manager platform, while utilising Microsoft’s brilliance in the software space.

    The only comment I’d make on this model is that once a user starts using Skype for telephony, the user won’t care how you’re delivering that telephony from an architecture point of view – they’re a Skype user. Now, with that comes pressure from other users – they look on enviously at the roaming user who gets their services everywhere, on any device…and they end up wanting it too. If you’re not careful you end up with a large proportion of your estate on Skype, with a smaller Cisco back end just supporting the transport. I’ve seen this happen a lot – you end up re-engineering around the Cisco platform.

    Field Experience

    I’ll be the first to point out this is subjective opinion…I’ve never seen a site move from Lync to Jabber and the users enjoy the experience. Ever. Yet companies sometimes do this, often because they’re invested in Cisco, and expanding on their Skype deployment may involve additional costs in product/licenses. Legitimate reasons of course, but it’s not a move the users enjoy, from my exposure to it.

    Integration

    I think this is where the gap starts to widen. Integration of Microsoft apps into, well, the Microsoft ecosystem is far stronger than Cisco’s (Who knew etc.). Everything from the look and feel – it’s all familiar, it’s all Office.

    Cisco isn’t that far behind to be fair, but the interface isn’t as familiar, and it doesn’t have the integration points that Microsoft has.

    Usage – Switching Modalities

    This is an interesting one – one of the things I love about Skype is the ability to start with one conversation type and quickly move to another. It’s simply how conversations go.

    Start with an IM, jump to voice, share some docs, drag in another person etc. You go from an IM to a full online multi-party conference simply and easily. Having to use different clients to achieve this – jumping between Jabber and WebEx for example – saps the user’s drive to use them. You need to plan ahead and have an idea of what you want to use.

    The Office365 Juggernaut

    If usage is one thing that starts leveraging the difference, I think Office365 is really where value and capabilities start stretching the divide.

    Pretty much all of my clients are on the Office365 roadmap or considering it. Astonishing isn’t it – ALL. I can’t remember the last time I did an on-prem Exchange migration that wasn’t consolidation in expectation of a ‘365 move.

    This is where the Microsoft Value proposition comes in – Skype is in pretty much all of the Enterprise Subscriptions within Office365. It’s there – you buy an E3, and you get it by default. Whether you use it or not is a value choice – but why would you pay for something twice? WAIT – Jabber is free, right? Well, sure, apart from the infrastructure you need to run it on….

    You then take into account what’s coming with Office365 & Skype for Business – the roadmap. Things such as:

    • Dial in PSTN Conferencing. We’re all familiar with this – dial a number, enter your PIN etc. This will be natively available in Office365 so you won’t need any additional kit on-prem. Even the lines will be provided by Office365, so you don’t even need to worry about channel consumption.
    • Native PSTN Calling. This is the ability to make normal phone calls directly from Skype within Office365 – so again, not requiring any on-prem equipment or lines. All the infrastructure is from Office365.
    • Integration to on-prem PBX. Soon enough you’ll be able to integrate your Office365 Skype users to your existing on-prem PBX. If you should want to.

    I think once you start looking at the value proposition from Microsoft and Office365 – the gap between Jabber and Skype for Business starts to get wider. Quickly.

    What about QoS! I need Enterprise Grade Voice Quality! Well, that’s coming too, by way of ExpressRoute.

    In some respects I think with telephony and voice we are where we were maybe five years ago with Email. Back then the idea of putting all of your corp email into the cloud was a bit wild & crazy. Now – not so much. I suspect large groups of users will start utilising ‘365 for voice as their working model allows it. I would fit the model for example – never in the office, always work from remote sites or home, and never use a traditional desk phone.

    Delivering my telephony natively out of a cloud – why would I care? Truth is of course it’s exactly what I do today anyway – use Skype Consumer as my everyday phone.

    It won’t match for all users – users with more complex requirements will still need a more complete functional delivery…but…the cloud will catch up.

    Summary

    How to summarise without having the hounds of subjectivity after me in the comments section…I think the Skype for Business proposition is a stronger one on every level than Jabber. It’s a nicer environment to use, users like it, and the IT business has a stronger roadmap for delivering better services more efficiently (i.e. cheaper) in the future.

    Microsoft have this right.

    Would I be disappointed if I ended up working for somebody who only had Jabber deployed? No, no I wouldn’t. It isn’t a bad platform – if you think I’m saying it is then I’ve obviously not written this in a way that represents my thoughts. I just think the SfB one, combined with the roadmap for ‘365, is a far stronger proposition.

  • Surface Pro 3 … Hmmm.

    Update 25/5/15

    Where am I now? Well, to be honest, the device rarely leaves my drawer, even after a brief positive spell. I just don’t understand the form factor. It just seems like one big compromise. As somebody who can touch-type (properly), the keyboard is painfully inadequate. I’d rather take a small laptop with me.

    I found I was carrying the SP3 and my iPad everywhere. What’s the point of that? So as it stands, I’m back to using a small laptop as my travel buddy. Far more capable, keyboard is better, and not so compromised. Ho hum. Let’s see what the SP4 brings.

    Update 29/1/15.

    So, original article below from the end of October 2014. Where am I today? Well, surprisingly, my report back is a positive one! I’ve still got the frustrations with Evernote and its terrible font size, but then my brain pointed out that when I really need it I can use Evernote in a web browser, and it’s just fine. I’d love them to fix it.

    I’ve got used to the unit, and it’s now more often than not my travel buddy! Even to the point I put some videos on it the other day to watch on the train…and my iPad stayed home. All the surprise.

    I still run in to the challenges of my job requiring power, and then I have to resort to other kit, but the Surface Pro 3 has found a far stronger place in my work life than I ever expected.

    Gone from 6/10 to an 8/10. 

    ====

    So, the Microsoft Surface Pro (3) – interesting concept. Not really a tablet or a laptop, but allegedly brilliant at both. I’ve got my hands on a rather shiny Surface Pro 3 – so I thought it would be an interesting journey to measure my usage over the week. 

    I’m going to try and use it as a replacement for my travel buddy – a 13” Macbook Pro Retina. Now, I get that some people who read my blog think I’m an Apple Fanboy. I honestly don’t think I am – I would say I’m a technology fan. I love stuff that’s cool, fun, and helps me get the job done. And is cool. 

    The last few years for me this has meant a Macbook Pro with Windows 8/8.1 running virtualised. Why? Well, I like Apple’s hardware, it fulfils the cool factor. And I’ve found the combination of the quality of the hardware with things like battery life really hit my technical cool spot. I like the power of the virtualisation capability of OS X – I can fire up anything very quickly, easily, and it just works. Now, before I get the hate-mail I know Windows laptops can do that too….but the oddity is that even though I have access to a wealth of laptops, phones, stuff etc. (for free, too, mostly), I always find myself gravitating to the Macbook Pro.

    So, I plan on logging my journey into using a Microsoft Surface Pro 3 as my travel buddy. I will try to be objective, but of course personal preference will come in. I’ll of course welcome any feedback. Even from you – yes, you – you know who you are.

    Day 1

    Unpacking is an interesting experience. They’ve taken some lessons from Apple – trying to make you enjoy the unpacking. Did they achieve it? Yes, but with some caveats. Firstly, the battery for the pen and the small stick-on pen holder thing rapidly disappear across the floor. Detracts from the cool, but ultimately not important. It’s just not quite right – they could do better! Perfection should be pursued in everything, for perfection to be perceived in anything.

    What next? Well, powering up the unit and getting going is no real hardship for anyone who has powered up a new Windows laptop. Crap-ware free…apart from the 80+ Windows updates to be downloaded and installed, including one firmware update that reliably informed me that ‘something had gone wrong’. Nearly two hours that took. Two. Hours. Two hours between taking the shiny out of the box and me being able to use it to appear funny and attractive to women on Twitter.

    Then I started installing all the software I use – of course it takes a while. Takes a while on OS X too, that and updates of course. I hate Windows Update. Hate it. With a passion. 

    So. The Pen. Works really well. I’ll say it functions better in OneNote than the Evernote pen does on the iPad. On the other hand, OneNote is the only app I could really find it worked well in at all. Oh, wow, except for maybe Paint.

    This led me to my first twitch. Evernote. I am a big Evernote fan. It’s different to OneNote – Evernote is more like an electronic scrapbook – I put everything in it. Using it on the Surface Pro 3 has been a challenge, but to be clear it’s the inadequacy of the software here, not the platform. The Windows EverNote client is utter tripe compared to the OS X one, so I instantly hit a usability barrier. No Pen Support. Text is TINY compared to OS X – have to expand all the notes. Evernote Touch – well, the less said about that unstable, buggy, terrible POS the better.

    Anyway, the Pen. Works brilliantly in OneNote. Can’t really find anything else worth talking about. Oh, wow, apart from the ‘where do you keep it’ conundrum. You get a stick on pad that you can stick on the keyboard to hold it (Steve Jobs would never have allowed it. Oh. Wait.). Personally I find slotting the pen on to the closed keyboard cover far more convenient. Just feels unfinished.

    What about everything else? Well, I do find the mix between the touch environment and the desktop setup a bit odd – but I’m willing to embrace it. So I set all my email accounts up in both normal Outlook 2013 and the Metro Mail client – i.e. use the Metro Mail client for touch, and normal Outlook for everything else.

    I’ve also got 1Password, EverNote (touch and desktop), all set up and ready to go. Also have my OneDrive (for Business and personal) all sync’ed up and working. That wasn’t that hard…as you’d expect.

    So…end of day 1 – where am I at? Well, let’s summarise, in bullets:

    • Ok, we can do this.
    • Wait, touching the screen for desktop apps is really fiddly.
    • Using the pen for desktop apps is clumsy.
    • Oh. Wait. Have a Bluetooth mouse somewhere – Microsoft Wedge – whoop.
    • Daughter: How do I play my videos? What the hell? Plugs in phone, shows phone drive, nothing else. Boggle.
    • Keyboard: Better than I expected! Expected almost spongy interim iPad type keyboard, actually feels just as good as my MBP keyboard.
    • Setup: Ye gods, how many updates.
    • Wait, Ethernet adapter looks like a 5 quid eBay job.
    • Keep touching screen at inappropriate times – desktop apps – getting frustrated and grabbing pen, then resorting to mouse.
    • Performance is ace – to the point that I’m not even sure which one it is I’m using.
    • Hate EverNote on Windows.
    • People use this as a tablet without the keyboard? What?
    • Wait, even more what, you don’t get a keyboard with this?
    • Screen – seems an odd resolution/shape? Looks ace though, as good as my iPad, maybe not quite the quality of my Retina MBP, but not enough to be readily noticeable.

    So, end of Day 1, it’s setup, it’s working…So let’s see what happens. I shall keep you updated.

    Actually, before I stop for day 1, interesting that I haven’t touched on the specs? Almost the same way I wouldn’t consider the specs of an iPad – it works, or it doesn’t. I think that means this works? Anyway, more anon.

    Edited to add an obvious point – I haven’t paid for this device, I’m not even sure of its price point. I shall investigate further and include opinions on that element.

    Day 2

    Ok, firstly, using this thing on your lap is a PITA – it’s uncomfortable and tiring to try and do productive work with it there. I’ve fixed it – using a tray*.

    *NSFW comedy language on that link. Quite possibly one of the funniest videos on YouTube though. Honest.

    So today is the first day I’ve tried to use this in anger…and I have to confess I failed and went back to using my MBP. I’ve been editing a lot of stuff today – screenshots, taking bits out of PDFs etc. – and quite frankly I was getting incredibly frustrated doing it on the SP3. To be fair, I think it’s because the small Wedge mouse is tiny and not as usable as my main mouse. On that front, however, I was quite happy editing all the stuff I ended up doing using the touchpad on my MBP. I couldn’t do it on the SP3 without giving myself the rage of frustration.

    So, not a massively successful day with it, truth be told. I will keep trying however; I want to believe.

    In the evening I was also attempting to use it just as a tablet – for looking stuff up and consuming. Quite big to do that with, and a little clumsy truth be told. Totally possible however. Of course I miss things like being able to stream stuff to my sound system or my TV – all things that my (insert any Apple device here) takes in its stride.

    Day 3

    Today I’ve spent the morning just using the SP3 – forcing myself to, as I don’t have my MBP with me. Brave. How am I finding it? Well:

    • On a table it works like a laptop – who knew! A good one – fast, easy to use, and I’m getting more used to the touch screen. One thing I have noticed is that I’m one of those fortunate people who is practically ambidextrous, so being able to use the mouse with one hand while randomly interacting with the screen with the other has made it far quicker to use than I initially realised.
    • It’s fast. Did I mention that? Not a massive spec this one either – an i5 with 4GB. It’s perfectly quick though, to the point that I hadn’t really checked out the technical specifications.
    • Still not convinced by the ‘tablet’ element of it. As a touch screen laptop though – it’s a good one – especially combined with the Pen.

    The pen is interesting – I found myself doing something earlier that surprised me: sat on a conference call using just the tablet part and the pen to take notes in OneNote. Notes I immediately emailed to Evernote, of course.

    I think what’s becoming clear to me is that this is a great machine – fast, capable, and good to use…but I’ve become very aware of how dependent you are on the apps you use in your daily workflow. These are often more important than the form factor of the device you’re using, aren’t they? For example, like I’ve said repeatedly above, I’m a very heavy Evernote user – I use it for everything. The Evernote client on Windows 8 is a challenge, whether you use the touch or the desktop app. I suppose if I were to persevere with the platform the correct thing would be to migrate to OneNote – and while I get that OneNote is a great note-taking app, it’s not good at what I also use Evernote for, which is as an electronic scrapbook for everything that appears on all my devices. I’d not only be looking at changing device, I’d also be looking at changing workflow.

    It’s amazing how important apps are. I’ve written before about how the only reason I keep with the phone I’m using is because of my investment in the apps on it. 

    How do I feel about this replacing my laptop and my iPad? Well, I think it can replace my iPad yes – except for when I’m out and about and just want to watch videos or read e-Magazines. It’s too big and bulky for that. It can though become my work travel buddy I think – one I take to meetings and work out on the road on.

    Can it replace my MBP and my iPad? No, I don’t think it can. To be clear though, I’m not sure that’s down to the form factor or the device itself – it’s down to the way I like to work, and the apps I like to use. I think that makes some of my more complex stuff just harder to produce on the Surface than it is on my Windows-equipped MBP. Incredibly subjective that one!

    I’m still utterly undecided. 

    Day 4 (Ok, not counting the weekend)

    I can confirm things are getting better! I actually used it as a tablet at the weekend to arrange some flights and some hotels – it was surprisingly workable. Perhaps I’m getting used to it?

    As a laptop replacement I’m also forcing myself to use it – and I’m slowly starting to get it. Like I point out above however, I think the apps you are used to, and how you work, have a real bearing on whether a device like this will work for you.

    Right now I’m feeling that it can work as my travel buddy, but perhaps not as my main weapon of content-production choice. Maybe that will change? I am putting the effort in, honest!

    I’ve discovered the MicroSD card slot for example, and now have BitLocker-encrypted File History set up and configured…and I like that. It is worlds apart from the ease of use of Time Machine on OS X however. I had to dig around and find it for a start.

    The touch screen element is becoming more part of my general working as well, and again, I like that.

    Any issues? Well…..

    I can’t see me taking this out when I’m out for the day doing random (work/non-work) things. Say for media consumption for example. It’s too big. I’d take my iPad Air.

    If I were in the office all day working on a complex design document, I’d probably take my proper laptop. Saying that, my decision is closer than it was – I reckon if I found myself working on such a document and only had the Surface Pro 3 available I wouldn’t be massively overwhelmed.

    I guess where I’m going with this is that I’m struggling to find a place for the SP3. It’s not a replacement for my very powerful laptop (perhaps due to my usage type), and yet it’s a little too heavy to be a general travel ‘consumption’ device. What I can see myself using it for though is a replacement of my travel buddy for meetings and the like – it works well. It’s a great combination of light, powerful and comfortable form factor. Well, unless you want to use it on your lap, then it’s a PITA.

    Day 5, 6 and 7 

    Ok, I got ruthlessly distracted by the real world and the Microsoft Decoded event. Where am I at? Well, I like it more than I did on days 1 thru 3, I can tell you that much. I do actually use it in anger now, and I found myself taking notes at the show with just the pen, straight into OneNote. I was then of course later emailing those notes directly to Evernote where I keep everything else, but hey, it’s starting to work as a thing.

    Did have a bit of a SNAFU earlier in the week when I got ruthlessly laughed at by somebody realising I was using my Macbook Pro as a tray so I could use the SP3 on my lap…

    I stated earlier that I’m struggling to find a place for it – and I think that’s still true, but the statement needs some further qualification. If I had my ‘main’ work machine – whether that’s a laptop, desktop, or whatever – then the Surface Pro 3 could absolutely be my only mobile device for work. Bizarrely though, if I were off out on the Tube (like I am in a bit), it would be my iPad Air that would come with me…for media consumption and web browsing on the go the Surface Pro 3 just cannot compete. It’s too big and clumsy.

    Could my iPhone 6 replace the iPad for media consumption? Well, here’s another little bizarre snippet. When the iPhone 6/6+ came out, I got a 6+…and I hated the size. Just didn’t get it. So I swapped it for a 6. Now, the 6 came with 128GB of storage, and that storage combined with the larger screen has changed how I use the unit – all of a sudden using it actively for Evernote (rather than just for reference) has become a reality…and guess what, I’m wishing I’d stuck with the 6+! I’m certain that if I’d kept the 6+, it would be my travel consumption device of choice.

    Complicated isn’t it? Of course my situation is further complicated by the fact that I have access to such a wide range of devices, consisting of a simply spectacular 13” Retina Macbook Pro (I will say that I think is the best laptop I’ve ever used…by a country mile), iPads, phones and some pretty powerful but less mobile kit. It’s because of this choice I think that I’m struggling to find a complete ‘space’ for the Surface Pro 3? 

    So, let’s try and simplify it.

    If I had a Surface Pro 3 as a travel/mobile device, and a more powerful work unit (whether at home or at work), I think it would be a great solution. It would feel a bit compromised in that I think I’d need some form of lighter media/web consumption device – perhaps a larger phone.

    Could it totally replace my travel & work laptops? Not a chance – for me anyway – I suspect my compute demands may be just too high.

    My current perfect working environment? 

    iPad Air – personal media/web consumption

    Surface Pro 3 – general travel and presenting type stuff.

    MB Retina 13” –  ‘proper’ work away from base, so VMs, writing, productivity.

    Work Base – Multiple machines, from a 17” MBP to a Mac Pro.

    This is a fantastically flexible environment…but then…look at the price for all those things! The SP3, a tablet that can replace your laptop? Nah.

    I get this is a confusing piece of writing – but I think that tells a story in its own right doesn’t it?

  • Load Balancing Broadband Connections

    At home I have three Internet connections – two older ADSL lines, and a fibre connection. Why? Well, firstly I moved providers for ADSL and never really got around to cancelling the other one…and then got fibre, and, well, yeah.

    Anyway, I use the ADSL lines constantly – they’re loaded most work days with backups, uploads etc. (I work from home a lot), and I use the Fibre connection for everything else in my place – Movie streaming, UC workloads (Voice/Video etc). So, I do push those lines.

    One issue I used to have was trying to force certain services via specific connections. In fact I wrote about it ages ago, here:

    CrashPlan Routing (as a side note, why did I ever think this format looked OK?!)

    I was using static routes to push services via certain connections – I eventually ended up putting in a small Cisco router to deal with the routing properly at the network layer, but it was never really ideal.
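
    For reference, the sort of thing I was doing looked a bit like this – a sketch with made-up addresses, using the Windows routing table rather than the Cisco box. It needs the NetTCPIP cmdlets (Windows 8/Server 2012 onwards) and an elevated prompt:

        # Sketch with made-up addresses: push traffic for one destination range
        # (say, a backup service) out via a specific ADSL router instead of the
        # default route.
        New-NetRoute -DestinationPrefix "203.0.113.0/24" `
                     -InterfaceAlias "Ethernet" `
                     -NextHop "192.168.1.254"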

    Anyway, I recently stumbled across a product that claimed to be able to load-balance broadband connections (ADSL, Cable, 4G, fibre…whatever) – this one:

    TP-Link TL-R470T+ Load Balanced Broadband Router

    Catchy name, hey?

    This unit has up to four WAN ports and can load balance those WAN ports in various ways (more on that in a minute), across multiple providers. I found this idea interesting – I thought I could use it to load-balance the two ADSL lines I have to get better general performance from them, and also I found the idea of richer features attractive – static routing, firewall, more advanced NAT etc.

    So, with that in mind – I bought it. Forty quid is half a tank of fuel so it wasn’t a massive risk.

    Guess what? It works brilliantly well – it was up and running in about 10 minutes flat. Let’s have a look at how I set it up:

    [Diagram: home network layout – two ADSL routers and the load balancer]

    The ADSL routers are at their default settings – DHCP enabled etc. all that good stuff – so it’s just a matter of plugging them in. I then wire in an Airport Extreme WiFi base-station for the WiFi provision. Bear in mind you can’t use the WiFi on the ADSL routers as that wouldn’t be going via the load-balancer.

    So what was the result? Well, bear in mind I have two ADSL lines – one averaging about 7Mbps, and the other about 5.5Mbps – here’s the results:

    [Screenshot: combined speed test result across the two ADSL lines]

    I’ve also got my backup running at the minute – the upload is nearer 2Mbps without it. Not too shabby? The general performance plays out as well: you can tell you’re getting far better performance than a single line can provide. I also tested media loads with Skype & Lync – usually notorious in these sorts of setups – and it all worked just fine.

    Did I see any problems at all? Well, yes – passive FTP doesn’t work well over this type of load balancing – it’s very, very slow. Fortunately, there’s a way around that in the unit’s configuration – I can specify that certain ports/protocols only go via one connection:

    [Screenshot: policy routing configuration]

    So what I’ve done above is send anything for a destination port of TCP21 (FTP) over WAN link 1 only. Sorted that right out. You can also, if you want, add static routes via one particular WAN interface.

    Any other downsides? Well, the ports are only 100Mb, not Gigabit – so if you want to communicate between machines at faster speeds (say one machine on 802.11ac WiFi and another on wired Ethernet), you’ll need the 802.11ac WiFi unit and the wired machine plugged into a Gigabit switch, and that Gigabit switch plugged into the load balancer. (Or, if like me you’re using the AirPort Extreme from Apple, you’d ensure your Gigabit machines were plugged into the switch on that.)

    If you do it that way, then traffic from the 802.11ac WiFi connection will go from the WiFi base station to the Gigabit switch, and then to the machine over Gigabit – so you’ll not hit any contention.

    If you just had the wired machine plugged into the load balancer, along with the WiFi base station, then you’d be limited to 100Mb between the faster devices.

    Very, very pleasantly surprised at how capable this unit is, and how easy it is to configure.

    On the load-balancing piece, something to be aware of is the options in the load-balancing configuration. Have a look at this:

    [Screenshot: load-balancing configuration options]

    Out of the box, mine came configured with ‘Enable Application optimized routing’ – what this appears to do is keep a single application’s traffic on one of the connections. So multiple sessions/apps from a single machine will get balanced between the connections. What this doesn’t do is give you a higher general throughput. Testing at SpeedTest.net, for example, showed the source alternating between the two providers.

    It’s not very intuitive but if you turn off both options then the connection seems to be load-balanced at a lower level and you get the higher throughputs reported – from my testing you do get the faster general experience as well.

    The unit itself has some other very cool stuff including:

    • Static routing – you can enter static routing information for specific targets/ranges.
    • Some very complex NAT options are possible.
    • Some quite advanced traffic management/control rules.
    • You can configure from 1 to 4 ports for load-balancing (Well, 2 as a minimum if you want to balance…).
    • Very functional firewall configuration options.

    This is quite a powerful little unit – all for forty quid!

    Anyway, I’ll be using this unit over the next few weeks – if I run into issues I’ll report back here, but from my testing so far I’m really, really impressed. You can also view a video run-through below:

  • Parallels Desktop 10.0

    Doesn’t seem that long ago that Parallels Desktop 9.0 was out – nor VMWare Fusion 6 – does it? My guess is there’s VMWare Fusion 7.0 just around the corner…

    Anyway, I wasn’t massively convinced by the upgrade to Parallels Desktop 9.0 – it didn’t readily offer me that much over the previous edition. Seemed like upgrade money for no real benefit. Also, I ran into quite a few issues after the upgrade to 9, including:

    Issues with iSight Camera and Parallels Desktop 9.0
    Pauses & USB Connections/Disconnections

    …which to be fair were fixed in updates after a couple of months.

    So what about 10.0 then? Is it worth the money? Well, very early to say but initial suggestions tell me it very much is – but not necessarily for new functions and features as such, more for two key impacts of the upgrade, namely:

    Memory Footprint. This seems far improved in 10 – the same virtual machine on my late 2013 MBP Retina shows some 60% (!) less memory consumption after boot (which in my setup starts Outlook, Lync and a few other things). To be fair to Parallels, they state about a 10% saving – but on my work machines it was far larger.

    Battery Life Impact. This version seems to have a far less demanding load when running on battery.

    Just the above two effects are a driver for me to upgrade – probably the battery life one more so. I tend to avoid using virtualisation when on the move due to this impact, but the loading now seems more than reasonable, which gives me more usable functionality when mobile.

    Parallels also state that this version is compatible with OS X 10.10 Yosemite – to be fair, I’d not really run into any issues using Parallels 9 on Yosemite either, but hey, who knows.

    Performance also seems to have taken a bit of a boost – and a real, noticeable one too. I wasn’t convinced by the claims of increased performance going from 8 to 9, however 9 to 10 really does seem to make a difference – certainly in Office applications anyway. I’ve not noted any real performance increases with startup/shutdown or snapshot operations, but these were all very fast anyway. They state that there’s a 50% performance improvement…who knows how valid that is, but I would agree it is faster.

    The Windows 7 look is still there, and it still uses Stardock.

    Some of the new features seem a little gimmicky – like the optimisation for example. All it seems to do is adjust the number of processor cores and RAM allocated to a machine? Not sure if there’s other optimisations in there, but I couldn’t really see it doing much more.

    [Screenshots: the virtual machine optimisation settings]

    There are some other neat touches too – for example, running Outlook in a VM now shows on the Dock and indicates the number of unread messages in your selected mailbox, much like you would see in Apple Mail or AirMail.

    The Control Centre seems to have been tidied up too – everything in one place, a little neater.

    One thing I do quite like is that all of a sudden Coherence seems more usable – it’s smoother, and it’s less obvious you’re virtualising something. I don’t often use Coherence, but I may do now.

    Anyways, early days – will report back after some more reasonable usage. Video below shows a run through on my late 2011 17” Macbook Pro.

  • Convergence/Divergence

    While I’m predominantly a techy, I do get very actively involved in our sales cycles for products and services around the Unified Communications solutions. Exposure to this sales cycle has led me to question the whole convergence story – or rather where we are on that story.

    Convergence was/is a big thing – sticking to UC/telephony the move from TDM to VoIP was a significant one, and offered many benefits to companies from leveraging existing investment in infrastructure to providing better and more usable services for users.

    For a while pretty much every tender that passed through my area was UC focussed – companies just weren’t looking to replace a phone on a desk with another phone on a desk. They wanted something functionally stronger, and able to offer more flexibility in usage topologies to the service consumers.

    This took the shape of telephony (of course), voice conferencing, integration with email platforms, instant messaging/presence, web conferencing etc. All the stacks that those of us in UC are fairly conversant with.

    What’s becoming apparent (or maybe I’ve only just really thought about it) is that there are distinct sets of companies out there that don’t want UC. What they want is telephony – and they see VoIP as a way of getting that telephony cheaply. They’re not readily interested in all the feature-rich capabilities delivered by UC&C. They want phones on desks, for their users…and they want them cheaper. They often appear to view telephony and VoIP as cost expenditures to be eroded, rather than as an investment platform to drive productivity.

    So is UC & telephony diverging again? Did it ever truly converge in the first place? It’s an interesting concept – and one that matches some vendors more than others. Take Cisco for example. While I don’t have the figures to hand (I’m sure I could Google it), they state that they have more than 100K collaboration customers worldwide, and that 95% of the Fortune 500 use Cisco UC. That all may well be true, however some 75-80% of that is dial-tone…and dial-tone isn’t UC, is it?

    I’m not particularly picking on Cisco here; it’s just the main competitor to Microsoft that came to mind.

    This divergence – or the choice between telephony and a UC platform – is a challenging space for Microsoft to compete in. For a pure telephone-on-a-desk topology that doesn’t want or need the higher functionality offered by the Lync platform, pricing becomes an issue – vendors such as Mitel, Avaya & Cisco have the opportunity to deliver a more competitive offering.

    Of course trying to ‘step up’ a VoIP Telephony platform to UC is where things start to become harder, and it certainly affects that competitive offering. More product and licensing is needed, often with associated infrastructure.

    I suspect 2014/15 will see some significant change in the UC/telephony arenas – Microsoft is gaining ground on the traditional vendors, and Microsoft’s competitors are also upping their game. Of course competition is good – it drives innovation, doesn’t it? I suspect we’ll also see changes in cloud delivery (I got so far without saying cloud…go me). In my experience our clients seem happier to consider cloud when contemplating functionally rich UC (or Contact Centre) type services, but less so for pure VoIP telephony.

    I can envisage that pattern starting to change – VoIP as a service is quite a compelling proposition in its own right if it’s delivered, priced and modelled correctly, and the service users are fully aware of the capabilities of the platform.

    Whatever happens, UC & VoIP is a fascinating area of the technology market to work in. It changes quickly, and offers a lot of innovative products and services in its overall stack. I like that, it keeps things fresh & new.

  • ISDN CrossOver Cable

    Here’s a little gotcha for you. Sometimes you may want to connect your PBX to your Session Border Controller (SBC) or voice gateway via E1/T1…and for whatever reason you just can’t get the interface to come up.

    Sometimes you need an ISDN crossover cable in these scenarios – and guess what, it’s not the same as a normal Cat5 crossover. The pin configurations are different.

    A normal Cat5 cable is a straight-through connection, with a 1:1 pin correlation. The colour scheme I normally use is (starting from the left, with the plug facing away from you):

    Pin 1 – Striped Orange
    Pin 2 – Solid Orange
    Pin 3 – Striped Green
    Pin 4 – Solid Blue
    Pin 5 – Striped Blue
    Pin 6 – Solid Green
    Pin 7 – Striped Brown
    Pin 8 – Solid Brown

    ISDN CrossOver
    The ISDN crossover swaps the pair on pins 1/2 with the pair on pins 4/5, so the pin mappings you’re now interested in are:

    1 to 4
    2 to 5
    4 to 1
    5 to 2

    If you’re making the cables then don’t try to cut the cable down to just those four wires – it’s nearly impossible to make a cable that way! Just leave the other pins straight through. So using the above colour scheme, one end stays the standard layout and the other end (again from the left, plug facing away from you) would be:

    Pin 1 – Solid Blue
    Pin 2 – Striped Blue
    Pin 3 – Striped Green
    Pin 4 – Striped Orange
    Pin 5 – Solid Orange
    Pin 6 – Solid Green
    Pin 7 – Striped Brown
    Pin 8 – Solid Brown
  • Mobile Platform Choice

    I’ve been having some fairly interesting conversations about mobile device platform choice recently – one of the fundamental ones is my insistence that I think Windows Mobile 8 is a spectacularly good system. Of course the immediate response to this is to point to my iPhone/iPad and say ‘HA! It can’t be that good!’.

    What’s interesting about that is that some of my friends who are not so into technology need it pointing out to them that the real investment in mobile devices (phones/tablets) is way beyond the operating system and handset layer…It’s applications. Cue blank stares around the table.

    I’ve spent a lot of money over the years on IOS compatible apps – from things like TomTom to expense apps, to all kinds of stuff that form part of my day to day work & private functional life. To up & move from one platform to another is pretty much akin to asking somebody to drop everything on their Windows PC and move to a Mac where they can only run OSX apps (Yes, I know you can run Windows apps on a Mac, it’s the only thing that makes the platform tenable for me). It’s a very hard sell, and one that a lot of people just don’t seem to get.

    It’s also I think part of the story that needs to be applied when looking at adoption rates and usage of Windows Mobile 8 against Android & Apple IOS – it doesn’t really say much about the platform capabilities as such, it says far more about Microsoft being late to the party of having a great mobile environment with decent apps. Everyone is invested everywhere else.

    I’m sure a lot of my tech followers will read the above and just think, like der, it’s obvious. It is obvious – but you’ll be astonished by how many non-tech people assume WinMo must be rubbish due to its adoption rate. It simply isn’t. In fact, it’s so not rubbish that I’m not far off dumping my investment in IOS apps to go the Windows mobile route. Just for absolute clarity here – the only reason I’m still on an Apple iPhone is because of my investment in applications on my mobile & iPad, and how much I rely on them.

    It’s a monetary investment – it’s not whether the apps are also available on WinMo. I don’t want to have to buy them again – certainly not the more expensive ones anyway. Like I say above however…I’m not that far off it. The biggest thing that’s holding me back at the minute is my investment in TomTom on the iPhone. Sure, my car has SatNav – but like most car systems it’s utter tosh compared to TomTom on any device. They can develop it so much quicker and they’re not so dependent on a single out of date hardware platform. I’m toying with replacing my car – and that will have a far more modern SatNav in it – for a couple of years at least until TomTom catches up, and by then I’ll have forgotten about what I spent on it previously – meaning my dependency on the iPhone app will diminish.

    Where am I going with this? Well, I think vendors that are late to the market (yes, you, Microsoft) need to be looking at filling that application void – offering incentives and working models that drive traffic to their platform, so users don’t think ‘oh, that’s a great platform, but I’d lose everything I have already’. There must be a model where this could be workable for both Microsoft and their partner vendors? Who knows? If I did know, I’d not be worrying about what new car to buy, that’s for sure. I’d buy both of them.

    If you’re application light, don’t dismiss Windows Mobile 8. It’s fantastically good, and you have a massive range of handsets.

    From a selfish perspective, driving competition can only end well. Competition drives innovation, makes better products and cooler tech. Everybody wins.

  • Automating Common Administrative Tasks

    In my working life of talking to many companies about their technology usage, and their deployment plans, I tend to find that the needs & wants of the average Systems/User Administrator are often forgotten. This, I think, is a dangerous mistake to make with a number of technology deployments as it can lead to issues and frustrations with deployment & administration.

    Spend some time making your administration team’s lives simpler, and you’ll be repaid with faster turnaround times and fewer errors in administrative functions. Just for clarity on that last bit, I’m not suggesting that administrators are error prone – far from it – but ask anyone to manually configure telephony for 30 users (for example) and expect them to get it 100% right all the time…well, I think you’re asking a lot.

    In my mind, I tend to think that if you are doing something specific, with a pattern, and repeatable in a manual method, well, quite frankly you’re doing it wrong. Wrong in that it’s slow, and probably more importantly – it’s error prone.

    Microsoft Lync, and Exchange, are prime examples. There are loads of PowerShell tools available for automating tasks, and for implementing certain functions, features and processes fully automatically and with minimal input. The problem, though, is that they require scripting skills. A lot of sysadmins are very comfortable with scripting – but it still takes time and effort. What about front-line user managers? The ones who set up users, who configure their telephony policy for example – do they know scripting? Do you WANT them to know how to script-admin against your systems? You’d hope the risk on the last one would be negated by security policy of course, but that’s not really the point.

    When I’ve worked on larger projects I’ve always tried to put effort into simplifying take-on processes, whether those take-on processes are migrations from legacy or the delivery of new services. Make it simple, make it repeatable – and what you achieve are fewer errors and faster take-on. Fewer errors means fewer headaches from support calls, and fewer unhappy users during migration/take-on.

    How does that apply to Microsoft Lync & Exchange, my most common take-on/migration projects? Well, these products have their own administration tools – Lync Control Panel for example. Having multiple tools does involve additional understanding and take-on from the administration staff. Admittedly it’s really not hard to administer users in Lync Control Panel – but it is something typically new, and it is something additional.

    The other thing – and probably the real driver – is that most common tasks are utterly repeatable. Think about that – repeatable. The task to achieve the end game is the same, all the time. If that doesn’t shout out automation, I don’t know what does.

    Setting up a new user in an organisation is a great example – add the user in Active Directory Users and Computers, add them to the right groups etc. That gives them their identity. Next, jump into Exchange management and configure their mailbox. Then, jump into Lync management and configure their unified comms stuff. I’m sure you can see where I’m going with this – it’s a faff. A repeatable faff that’s prone to error.

    How do I fix this? Well, I extend ADU&C to automate the common tasks of:

    • Configuring their Exchange mailbox
    • Configuring their Lync environment
    • Configuring their telephony in Lync etc.
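
    To give a flavour of what sits behind those menu extensions, here’s a minimal sketch of the three steps as plain PowerShell. It assumes the Exchange and Lync management shells are loaded, and the pool, policy and example values are made up – the real scripts obviously carry a lot more validation and logging:

        # Minimal onboarding sketch, not a production script.
        # Assumes the Exchange and Lync management shells are loaded;
        # pool, policy and example values are made up.
        param(
            [Parameter(Mandatory = $true)][string]$UserUpn,   # e.g. jbloggs@contoso.com
            [Parameter(Mandatory = $true)][string]$LineUri    # e.g. tel:+441234567890
        )

        # 1. Exchange mailbox
        Enable-Mailbox -Identity $UserUpn

        # 2. Lync/Skype for Business enablement
        Enable-CsUser -Identity $UserUpn `
                      -RegistrarPool "lyncpool01.contoso.com" `
                      -SipAddressType UserPrincipalName

        # 3. Telephony - Enterprise Voice plus a voice policy
        Set-CsUser -Identity $UserUpn -EnterpriseVoiceEnabled $true -LineUri $LineUri
        Grant-CsVoicePolicy -Identity $UserUpn -PolicyName "UK-National"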

    There’s absolutely no reason this cannot be extended to take-ons too, rather than just new users. For example, with Lync deployments, I often put in scripts that let administrators enable groups of people, or in certain situations whole Organisational Units – there’s a rough sketch of that below.
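
    The OU-level take-on scripts follow the same pattern – again just a sketch, with hypothetical OU and pool names:

        # Rough sketch: enable everyone in an OU for Lync/Skype for Business.
        # Assumes the Lync management shell; the OU and pool names are made up.
        $ou   = "OU=Sales,DC=contoso,DC=com"
        $pool = "lyncpool01.contoso.com"

        Get-CsAdUser -OU $ou |
            Where-Object { -not $_.Enabled } |
            ForEach-Object {
                Enable-CsUser -Identity $_.UserPrincipalName `
                              -RegistrarPool $pool `
                              -SipAddressType UserPrincipalName
            }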

    The result? Happier administrators, fewer take-on/enablement errors, fewer support calls, increased productivity, and a certain feeling of TECH AWESOME. You can’t argue with that can you?

    The video below gives a run through of some of the Lync stuff – it will give you a good idea of what I mean. The hi-def version can be viewed by clicking here.