Month: July 2018

  • When is storage not storage?

    When is plain old storage not plain old storage? When it’s Network Attached Storage (NAS) that’s when. 

    I don’t tend to delete stuff, as storage is relatively cheap, and I usually find that if I delete something I’ll be irritated in the near future that I deleted it. I have my email archives going back to 1999, for example. Yes, yes, I know.

    I’ve always shied away from network attached storage. Every time I’ve looked at it I’ve been put off by the network transfer rate bottleneck and the fact that locally attached storage has, for the most part, been a lot quicker. Most of my kit is SSD driven, and my volume storage was fast Thunderbolt-attached. Typically I’d have a tiered storage approach:

    • Fast SSD for OS/Apps/normal day to day stuff.
    • Thunderbolt 3 connected SSD for my virtualisation stuff.
    • Spinning ‘volume’ storage.

    The thing is, my storage was getting into a mess. I had loads of stuff connected: about 12TB off the back of my iMac#1 (my virtualisation machine, for stuff that’s running all the time), and about another 15TB off the back of my everyday iMac Pro. That’s a lot of spinning disk. Trying to ensure important stuff was backed up was becoming more and more of a headache.

    So, I finally gave in – mainly due to the rage around cabling more than anything else – and started investigating a small server type setup. But then it occurred to me I’d just be moving things about a bit, and I’d still have a ton of ad-hoc storage… So I started investigating the Network Attached Storage devices from the likes of Synology and QNAP.

    Oh my, how wrong was I about NAS units. They’re so capable it’s ridiculous, and they’re not just raw storage. I have a few of them now, and they’re doing things like:

    • Storage, because I needed storage.
    • A couple of virtual machines that run some specific scripts that I use constantly.
    • Some SFTP sites.
    • A VPN host.
    • Plex for home media.
    • Volume snapshots for my day to day work areas.
    • Cloud-Sync with my DropBox/OneDrive accounts.
    • Backup to another unit.
    • Backup to another remote unit over the Internet (this is more of a replica for stuff I use elsewhere really).
    • Backup to a cloud service.

    I did run into some performance issues, as you can’t transfer to/from them faster than the 1Gbps connection – which is effectively around 110MB/s (megabytes per second), so 9-10 seconds per gigabyte. My issue was that I had other stuff trying to run over the 1Gbps link to my main switch, so if I started copying up large files over the single 1Gbps links from my laptops or iMac(s), then of course everything would slow down.
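    The back-of-envelope arithmetic behind that 110MB/s figure is worth spelling out – 1Gbps divided by 8 gives a 125MB/s ceiling, and real-world Ethernet/IP/TCP overheads knock roughly 10% off that (the 110 figure below is a typical observed value, not a spec):

```python
# Back-of-envelope numbers for a 1 Gbps link.
LINK_BITS_PER_S = 1_000_000_000
RAW_MB_PER_S = LINK_BITS_PER_S / 8 / 1_000_000  # 125 MB/s before any overhead
EFFECTIVE_MB_PER_S = 110                        # typical real-world rate after Ethernet/IP/TCP overhead

def seconds_per_gigabyte(rate_mb_per_s: float = EFFECTIVE_MB_PER_S) -> float:
    """Seconds to move 1 GB (1000 MB) at a sustained rate in MB/s."""
    return 1000 / rate_mb_per_s

print(f"Theoretical ceiling: {RAW_MB_PER_S:.0f} MB/s")                      # 125 MB/s
print(f"Per gigabyte at {EFFECTIVE_MB_PER_S} MB/s: {seconds_per_gigabyte():.1f} s")  # ~9.1 s
```

    Hence the 9-10 seconds per gigabyte quoted above.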

    That was fairly simple to fix, as the Synology units I purchased support link aggregation – so I set up a number of ports using LACP link aggregation (effectively multiple 1Gbps links) and configured my main iMac machines with two 1Gbps link-aggregated ports. Now I can copy to/from the Synology NAS units at 110MB/s while running other network loads to other destinations, and not really experience any slow-downs.

    Just to be clear – as I think there’s some confusion out there on link aggregation – aggregating 2 x 1Gbps connections will not allow you to transfer between two devices at speeds >1Gbps, as it doesn’t really load balance. It doesn’t, for example, send one packet down one link and the next packet down the other. What it does is work out which link to use for a given flow and *use that link for the operation you’re working on*.

    If I transfer to two targets however – like two different Synology NAS units with LACP – I can get circa 110MB/s to both of them. Imagine widening a motorway – it doesn’t increase the speed, but what it does do is allow you to send more cars down that road. (Ok, so that often kills the speed, and my analogy falls apart but I’m OK with that).
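    The per-flow behaviour can be sketched in miniature. This is a toy illustration, not Synology’s or any bonding driver’s actual hash (real implementations use XOR-based policies over MACs, IPs, and/or ports), but the key property is the same: a flow always maps to exactly one member link.

```python
import hashlib

LINKS = ["eth0", "eth1"]  # two aggregated 1 Gbps member ports

def pick_link(src_mac: str, dst_mac: str) -> str:
    """Map a flow (identified here by its MAC pair) onto one member link.

    Because the choice is a pure function of the flow's addresses, every
    packet of one transfer rides the same link - so a single copy tops
    out at that link's 1 Gbps, while flows to *different* targets can
    hash onto different links and run at full speed simultaneously.
    """
    digest = hashlib.sha256(f"{src_mac}->{dst_mac}".encode()).digest()
    return LINKS[digest[0] % len(LINKS)]

flow = ("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02")
print(pick_link(*flow))                                  # same answer every time
print(pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:03"))  # may differ per target
```

    That’s why one copy never beats 110MB/s, but two copies to two different NAS units can both hit it.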

    I can’t imagine going back to traditional locally attached storage for volume storage now. I do still have my fast SSD units attached; however, they’re tiny and don’t produce a ton of cabling requirements.

    I regularly transfer 70-100GB virtual machines up and down to these units, and even over 1Gbps this is proving to be acceptable. It’s not that far off locally attached spinning drives. It just took about 15 minutes (I didn’t time it explicitly) to copy up an 80GB virtual machine, for example – that’s more than acceptable.
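    A quick sanity check shows that the 15-minute figure is in the right ballpark for the link (the 110MB/s rate is the same typical real-world assumption as earlier; extra minutes come from protocol and filesystem overheads):

```python
EFFECTIVE_MB_PER_S = 110  # typical sustained throughput on a 1 Gbps link

def minutes_to_copy(gigabytes: float, rate_mb_per_s: float = EFFECTIVE_MB_PER_S) -> float:
    """Minutes to move a payload of the given size (GB) at a sustained MB/s rate."""
    return gigabytes * 1000 / rate_mb_per_s / 60

print(f"80 GB at {EFFECTIVE_MB_PER_S} MB/s: {minutes_to_copy(80):.1f} min")  # ~12 min
```

    Roughly 12 minutes at line rate, so an observed 15 or so is entirely plausible.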

    The units also encrypt data at rest if you want – and why would you not want that? I encrypt everything just because I can. Key management can be a challenge if you want to power off or reboot the units, as the encryption keys must either be:

    • Available locally on the NAS unit via a USB stick or similar, so that the volumes can be auto-mounted; or
    • Typed in manually as your encryption passphrase to mount the volumes by hand.

    It’s not really an issue as the units I have have been up for about 80 days now. It’s not like they’re rebooting every few days.

    The Synology units have an App Store with all kinds of stuff in there – properly useful things too:

    Synology Home Screen

    Anyway, I’m sure you can see where I’m going with this. These units are properly useful, and are certainly not just for storage – they are effectively small servers in their own right. I’ve upgraded the RAM in mine – some are easier to do than others – and also put some SSD read/write cache in my main unit. I have to say I wouldn’t bother with the SSD read/write cache, as it hasn’t really made any difference to anything beyond some benchmarking boosts – I wouldn’t know if it weren’t there.

    I’m completely sold. Best tech purchase and restructure of the year. Also, I’m definitely not now eyeing up 10Gbps connectivity. Oh no.

    As a quick side-note on the 1Gbps/10Gbps thing – does anyone else remember trying 1Gbps connectivity for the first time over say 100Mbps connectivity? I remember being blown away by it. Now, not so much. Even my Internet connection is 1Gbps. 10Gbps here I come. Probably.

  • Oh Mer Gawd 2018 Macbook Pro (Apple)

    2018-08-02 Brief update – I’ve added a video clip at the bottom showing the 2018 i9 MacBook Pro against the 2017 iMac Pro (10 Core/64GB RAM) unit.

    The iMac Pro completes it in 4 minutes 43 seconds; the 2018 i9 MacBook Pro takes 9 minutes 33 seconds.

    iMac Pro Example Times

    ====

    Anybody with any interest in tech can’t fail to have noticed all the noise around the recent update to the Apple MacBook Pro range – in particular around thermal throttling and the performance of the units. To be fair to Apple, they responded quickly (surprisingly so…) and the patches they published did seem to make a significant difference.

    I’m not particularly into the ‘OMG it wouldn’t have happened under Steve Jobs’ thing, but I can’t help thinking this wouldn’t have happened under… //slaps myself. It does seem that Apple’s quality control is struggling a bit, doesn’t it? Let’s think: there was the ridiculous High Sierra security bug, the throttling issue, the T2 kernel panics (a work in progress), iPhone throttling, and of course the issue around the failing keyboards.

    As a quick side-note, if you’re interested in the repairability side of Apple kit, you could do worse than to subscribe to Louis Rossmann’s channel over on YouTube. It’s fair to say he’s not a fan of Apple’s attitude to repairability! Quite reasonably so from what I’ve seen.

    I’m not going to bang on about the throttling; however, I thought I would look at the performance of the 2018 unit and compare it to my older ones – and, just for the laughs, to the Surface Book 2 I also have. You can see the results in the video below – spoilers: the i9 is faster than all of them. Who knew?

    I should also do a comparison against my iMac Pro but I can’t at the minute as Apple is busy swapping it for another one. More on that in a later blog, once I have an outcome. 

    So, quick summary. Do I like the 2018 unit? Is it a big step up from the 2017? Do I have any concerns? Well:

    Yes. It’s fast.

    Yes. The 32GB upgrade is going to help me a lot. In some instances I’ve had to carry two laptops to achieve some things (admittedly an unusual use case), but I won’t have to do that any more.

    Concerns? Wow it runs hot. That makes me wonder about reliability. But. We shall see.

    Anyway, the video is below should you be interested. You can see the information on the 2017 unit here: 2017 Kaby Lake MacBook Pro

    Comparison with the 2017 iMac Pro:

  • Faster Internet – FOR FREE!

    Faster Internet – for FREE! Can you really get something for nothing? Well perhaps not, but there are things you can do to both optimise your Internet connection and protect your usage.

    What do I mean? Well, given most of my readership tends to be techy in nature, I’m not going to go into massive amounts of detail, but in effect every Internet provider tends to assign you their DNS servers… and these are usually far from optimal. A lot of techs I know then default to switching to Google’s DNS (8.8.8.8 and 8.8.4.4) because they’re pretty quick.

    Yes, they’re pretty quick… but you’re gifting Google the ability to know every URL you resolve to an IP address. If you’re comfortable with that then fair enough – I’m not, however. Google makes me uncomfortable from a privacy perspective.

    So, let’s look at Cloudflare. Many of you will be familiar with them through their web caching technologies, but few seem to be aware they also have public DNS servers available – 1.1.1.1 and 1.0.0.1. Cool addresses, hey? Forgetting the cool addressing, just look at the performance – they’re properly fast.

    There are various DNS benchmarking tools out there – OK, they’re not the most interesting of tools, but they do give you decent information. Consider the performance difference between the Google servers and Cloudflare:

    DNS Performance

    As you can see, other than for locally cached responses, Cloudflare nails the performance – and the reliability – in all the required areas. I know it doesn’t look like much, but the differences add up, and you can feel the difference.
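    If you want to run a rough comparison yourself without a dedicated tool, the sketch below hand-rolls a minimal DNS query over UDP using only the standard library and times the round trip. It’s illustrative only – a real benchmark would average many queries across record types and cached/uncached names:

```python
import socket
import struct
import time

def build_query(name: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet asking for an A record."""
    # Header: id, flags (RD set), 1 question, 0 answer/authority/additional.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def time_resolver(server: str, name: str = "example.com", timeout: float = 2.0) -> float:
    """Send one query to the given resolver and return round-trip time in ms."""
    query = build_query(name)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.perf_counter()
        s.sendto(query, (server, 53))
        s.recvfrom(512)  # we only care that a reply arrived, not its contents
        return (time.perf_counter() - start) * 1000

if __name__ == "__main__":  # needs network access
    for server in ("1.1.1.1", "8.8.8.8"):
        try:
            print(f"{server}: {time_resolver(server):6.1f} ms")
        except OSError:
            print(f"{server}: no response (your ISP may not route to it)")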

    What about the privacy thing – the provider knowing everything you do? Well, I suppose there is an element of me just not trusting Google – any company that needed a tag line of ‘don’t be evil’ has issues. Cloudflare, by contrast, do seem to have a defined policy of never storing your queries on disk, and of being audited to ensure this is true. Apparently. I have no proof this is true, beyond the stated policy.

    Anyway, you can read the launch blog here:

    Announcing 1.1.1.1: the fastest, privacy-first consumer DNS service

    I’ve been using the service for a while, and it is faster than the ones I was using previously, and by some margin. The privacy element is a nice cherry on the cake.

    You’d hope their future plans to expand caching can only improve response times further.

    Oh, as a funny and slightly bizarre side-note, some ISPs can’t actually route to 1.1.1.1. I’m sure they’re working on resolving that. It’s easy to check whether you can use the system: simply fire up nslookup (on Windows, Linux, or macOS), select the server with ‘server 1.1.1.1’, and see if you can resolve any addresses:

    NSLookup Example

    How you implement this for your Internet connection varies – on my platform, for example, I have my own DNS server that caches stuff, so I just use Cloudflare as an upstream resolver for that. You can also update the DHCP settings on your router to issue 1.1.1.1 and 1.0.0.1 – that’s probably the simplest way of doing it for most people, I imagine.
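    As one concrete illustration – assuming your router or a local box happens to run dnsmasq, which yours may well not – forwarding to Cloudflare and handing the addresses out over DHCP looks roughly like this:

```
# /etc/dnsmasq.conf (illustrative fragment)
no-resolv                 # ignore the ISP-supplied resolvers
server=1.1.1.1            # forward queries upstream to Cloudflare
server=1.0.0.1
# ...or hand the addresses straight to DHCP clients, bypassing the local cache:
dhcp-option=option:dns-server,1.1.1.1,1.0.0.1
```

    Other routers bury the equivalent settings in their DHCP/DNS admin pages.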

    It really does make a difference.

    Honest.