- Create, mount, unmount, and use vfat, ext4, and xfs file systems
mkfs.xfs, mkfs.ext4, mkfs.vfat
- Mount and unmount CIFS and NFS network file systems
mount -t cifs //server/share /mnt/share -o username=userid,password=pword,domain=AD
mount -t nfs server:/vol/share /mnt/share
- Extend existing logical volumes
- Create and configure set-GID directories for collaboration
set-GID directories make files and subdirectories created inside them inherit the directory's group
chgrp mygroup ./directory
chmod 2775 ./directory (the leading 2 sets the setgid bit; group write is needed for collaboration)
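As a quick sanity check, the setgid bit shows up as a leading 2 in the octal mode. A minimal sketch, runnable as any user on a scratch directory (GNU coreutils assumed; the chgrp step is omitted since it requires membership of the target group):

```shell
# Create a scratch directory and set the setgid bit plus rwxrwxr-x.
dir=$(mktemp -d)
chmod 2775 "$dir"
stat -c '%a' "$dir"   # prints 2775 -- the leading 2 is the setgid bit
rmdir "$dir"
```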
- Create and manage Access Control Lists (ACLs)
- Diagnose and correct file permission problems
chmod, chown, getfacl, setfacl
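A minimal ACL round trip, assuming the filesystem supports ACLs and the acl tools are installed; the `nobody` user here is just a stand-in for a real collaborator:

```shell
# Grant the user 'nobody' read access to a scratch file via an ACL.
f=$(mktemp)
setfacl -m u:nobody:r "$f"   # -m modifies/adds an ACL entry
getfacl -c "$f"              # -c omits the "# file:" comment header
rm -f "$f"
```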
- List, create, delete partitions on MBR and GPT disks
fdisk, gdisk, parted
- Create and remove physical volumes, assign physical volumes to volume groups, and create and delete logical volumes
using a new disk/partition with LVM: pvcreate /dev/device
creating a new volume group: vgcreate VG00 /dev/device
adding a PV to an existing volume group: vgextend VG00 /dev/device
creating a logical volume: lvcreate -L 100G -n lvhome VG00
- Configure systems to mount file systems at boot by Universally Unique ID (UUID) or label
blkid to get the UUID/label, then add to /etc/fstab
can set label on ext filesystems with tune2fs or e2label
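An /etc/fstab entry mounting by UUID might look like this (the UUID and mount point are placeholders; substitute the values reported by blkid):

```
# <device>                                 <mountpoint> <type> <options> <dump> <pass>
UUID=0a3b5cde-1234-4f6a-9c8d-0e1f2a3b4c5d  /data        xfs    defaults  0      0
```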
- Add new partitions and logical volumes, and swap to a system non-destructively
as per above commands!
need to set the fstype correctly with fdisk/gdisk/parted
- Boot, reboot, and shut down a system normally
reboot, poweroff, shutdown, wall
The systemctl commands are preferred.
- Boot systems into different targets manually
systemctl get-default, systemctl set-default multi-user.target
systemctl rescue, systemctl emergency, systemctl isolate multi-user.target
systemctl set-default graphical.target
- Interrupt the boot process in order to gain access to a system
Esc in GRUB, e to edit, find the linux16 line, Ctrl-E to jump to the end of the line
Boot to rescue mode:
append systemd.unit=rescue.target
Boot to change the root password:
remove rhgb and quiet (if present)
append rd.break enforcing=0 to break after the initramfs, with SELinux enforcement off
mount -o remount,rw /sysroot
chroot /sysroot, run passwd to set the new root password, then exit the chroot
mount -o remount,ro /sysroot
exit (continues the boot process)
- Identify CPU/memory intensive processes, adjust process priority with renice, and kill processes
top, nice -n <value> command, renice -n 5 -p <pid>, kill, kill -9
you can only lower the priority of your own processes; only root can raise priority too
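Since nice with no arguments prints the current niceness, you can watch the adjustment take effect without starting a long-running process:

```shell
# Run `nice` itself at an adjusted niceness and print the result.
nice           # prints the current niceness (typically 0)
nice -n 5 nice # the same command, run 5 niceness levels lower in priority
```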
- Locate and interpret system log files and journals
- Access a virtual machine’s console
- Start and stop virtual machines
virsh start myVM
virsh shutdown myVM
virsh reboot myVM
- Start, stop, and check the status of network services
systemctl start/stop/status network.service
- Securely transfer files between systems
scp file user@system2:/path/newfile
- Configure firewall settings using firewall-config, firewall-cmd, or iptables
firewall-config (graphical tool)
firewall-cmd --permanent --add-service=http, then firewall-cmd --reload
- Configure key-based authentication for SSH
ssh-keygen -t rsa, then ssh-copy-id user@server to install the public key
- Set enforcing and permissive modes for SELinux
setenforce 0|1 at runtime, getenforce to check the current mode
boot parameter "enforcing=0|1"
Edit /etc/sysconfig/selinux, applied at reboot
- List and identify SELinux file and process context
ls -Z file, ps -eZ
- Restore default file contexts
restorecon -Rv /path
- Use boolean settings to modify system SELinux settings
getsebool -a | grep httpd, or sestatus -b | grep httpd
setsebool -P boolean_name on|off (-P makes the change persistent)
- Diagnose and address routine SELinux policy violations
view SELinux violations: sealert
fix basic problems: restorecon, or with the instructions shown
- Access a shell prompt and issue commands with correct syntax
bash shell, case sensitivity, pwd, ls, cd
- Use input-output redirection (>, >>, |, 2>, etc.)
redirect to and from files: > < >> << 1> 2>
pipe between commands: |
prevent > overwriting an existing file: set -o noclobber
here-document: cat - <<% reads the following lines as stdin until a line containing only %
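A quick tour of the operators above, in a scratch directory:

```shell
tmp=$(mktemp -d); cd "$tmp"
echo one > out.txt            # > truncates or creates the file
echo two >> out.txt           # >> appends
wc -l < out.txt               # < feeds the file to stdin (prints 2)
ls nosuchfile 2> err.txt      # 2> captures stderr only
cat - <<EOF                   # << here-document, read until the delimiter
inline text
EOF
cd /; rm -rf "$tmp"
```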
- Use grep and regular expressions to analyze text
grep "string" file.txt
grep -E "string1|string2" file.txt (egrep is a deprecated alias for grep -E)
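For example, alternation needs extended regular expressions (grep -E):

```shell
# Match lines containing either pattern.
printf 'alpha\nbeta\ngamma\n' | grep -E 'alpha|gamma'
# prints:
# alpha
# gamma
```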
- Access remote systems using ssh
- Log in and switch users in multiuser targets
su, su – username, sudo
- Archive, compress, unpack, and uncompress files using tar, star, gzip, and bzip2
create: tar cvf file.tar file.* / extract: tar xvf file.tar
(add z for gzip, j for bzip2, Z for compress)
star -xattr -H=exustar -c -f=test.star file.*
cpio -ov to create an archive / cpio -iv to extract
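Putting the tar flags together, a create/list/extract round trip in a scratch directory:

```shell
tmp=$(mktemp -d); cd "$tmp"
mkdir src; echo hello > src/a.txt
tar czvf files.tar.gz src/        # c=create, z=gzip, v=verbose, f=archive file
tar tzf files.tar.gz              # t=list the archive contents
mkdir restore
tar xzf files.tar.gz -C restore   # x=extract, -C changes to the target dir first
cat restore/src/a.txt             # prints hello
cd /; rm -rf "$tmp"
```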
- Create and edit text files
- Create, delete, copy, and move files and directories
rm mv touch cp mkdir rmdir
- Create hard and soft links
hard link (two directory entries pointing at the same inode): ln file newfile
soft link (indirect pointer to a path): ln -s file newfile
links to directories have to be soft links
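You can verify the inode behaviour directly with stat and readlink:

```shell
tmp=$(mktemp -d); cd "$tmp"
echo data > file
ln file hard             # hard link: a second name for the same inode
ln -s file soft          # soft link: a separate file containing a path
stat -c '%i' file hard   # the two inode numbers are identical
readlink soft            # prints: file
cd /; rm -rf "$tmp"
```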
- List, set, and change standard ugo/rwx permissions
chmod 777 file, chmod a+rwx file
7 is made from the sum of 4 (read) 2 (write) and 1 (execute)
So 5 would be read + execute, 4 would be read only
3 digits for User, Group, Other
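The arithmetic is easy to check with stat (GNU coreutils assumed):

```shell
f=$(mktemp)
chmod 750 "$f"         # 7 = rwx (user), 5 = r-x (group), 0 = --- (other)
stat -c '%a %A' "$f"   # prints: 750 -rwxr-x---
rm -f "$f"
```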
- Locate, read, and use system documentation including man, info, and files in /usr/share/doc
man ps, whatis ps, apropos ps, info bash
Install SELinux man pages: yum install -y selinux-policy-devel; mandb
I’ve been quiet for the last few months, mainly because I’ve been working on a Backup project, with not so much focus on Virtualisation.
Prior to this, I’d mostly left it to the professionals, as it had generally fallen into the remit of the storage teams, but when I finished off my previous projects, and the music stopped, the only chair remaining was on a ‘behind schedule’ backup capacity project.
I’m not going to go into the whys and wherefores of why the project was stuck in limbo, but I decided to share my thoughts on what I learned from it.
- Requirements, Requirements, Requirements
If a project doesn’t have them, how do you know if you’re successful? If they’re not nailed down, scope can, and almost certainly will, creep in several directions.
- How much data are we trying to protect
- How much replication traffic will there be between sites
- How many simultaneous streams do we need to support
- How many servers are we backing up
- How much will de-duplication save
- What kind of data are we protecting
- How will we transfer the data
- What are we protecting against
- Accidental deletion
- Filesystem corruption
- Loss of a server
- Loss of a storage subsystem
- Loss of a site
- Rogue agents within the business
- What’s the minimum RPO and maximum RTO we are aiming for
- This will be affected by backup size/duration/policy
- Are there any specific security requirements
- Encryption – minimum cipher strength, on data and/or control traffic
- Authorisation – granularity of access control
I’m sure those who have spent more time dealing with backups than I have, could easily add to this list!
- Get people to think about what they are requesting backups for, and the impact of taking them, or not taking them.
If you don’t put some constraints on what should be backed up, you might end up trying to back up the world. Nervous admins will invariably ask for everything to be backed up, “just to be on the safe side”, when the service might be recoverable through an automated build process.
Some questions to think about are:
- Why do we need to backup this data (or server)?
- Why do we need to backup this data now, if it hasn’t been backed up before?
- Can we not recover the data or server any other way?
- How would we recover the service, if the data is restored from backup?
- What would be the impact if the data is lost and we couldn’t recover it?
- What is the failure scenario we are aiming to recover from?
- How quickly does the data need to be recovered? (RTO)
- How recent does the backup need to be, to be worthwhile? (RPO)
- How long do the backups need to be kept? (Retention)
- Who owns the data?
- Is the data subject to any Compliance legislation? (eg PCI DSS)
- When can the backups be taken?
- Is there any impact to the service when the backups are running?
- Is there any impact to the service if the backups are not working?
- If a backup is missed, do we need to reschedule it?
- Do the backups need to go off-site, or to a different geographic region?
- What is the size of the backup?
- What is the delta change?
- Will there be a regularly scheduled restore test?
- Who can request a data restore?
- Who can request expiry of the stored backups?
- Who can request removal of the backup policy?
This set of questions can help you verify the need for a backup, and capture the constraints and factors needed to run it in day-to-day operations.
- Other thoughts
- Before you put the new backup service into production usage:
- Get all your install/config/upgrade automation tested
- Get everything on matching versions
- Run vulnerability scans and fix any issues
- Involve your operational teams
- Get your processes and procedures agreed
- Plan out and prioritise your project tasks, make sure you deliver what is required, save anything else for ‘phase 2’
- And finally, learn to let go! Tie up (or hand over) any loose ends, and let the operational staff run the backups.
Ok, I’m finding this one difficult, it’s become my baby for the last 4 months, but I’m trying.
PLEASE TAKE AWAY MY ACCESS SO I CAN’T KEEP CHECKING IT’S ALL STILL WORKING!!!!
I ventured over the Pennines yesterday to attend the VMware TAM round table meeting, held in the Manchester Piccadilly Hilton. This provided the opportunity to meet both our company’s outgoing TAM and our new one, and to learn a little more about what our current contract can provide that we’re not really utilising.
I also made notes on the presentations, and thought I would share them in case it’s useful to someone else. Apologies in advance to the presenters for my paraphrasing – if the decks are made available, I may update these notes!
I arrived pretty early, due to the train times. Tea, coffee and pastries were provided, then we converged on the meeting room. After introductions, the first session was from Simon Todd, covering VSAN 6.5.
VSAN 6.5 – Simon Todd
- Sky – one of the UK’s largest VSAN implementations, at 6 Petabytes
- Using it to maintain competitiveness, and to provide grow-as-you-go expansion when needed.
- Used for production workloads, eg SQL, Exchange, On Demand video, Sky Q, Video Transcoding, UHD streaming
- Water Utility Company
- 66-74% cost saving on VM storage cost
- Procurement cycle went from 3-6 months for traditional SAN to 7 days for extra capacity
- Billing run for 7M customers dropped from 16 hours to 3 hours
- A380 – runs VSAN to collect data from 300k sensors for data analysis for preventative maintenance. Every hour saved on the ground saves $25k
- Oil Rigs, Nuclear subs, Aircraft carriers – anywhere that server maintenance is tricky
- Have to use the right tools, and use them in the right way
- iometer is a legacy tool, if you’re testing All-Flash storage you need to use >1 outstanding IOs per target – the manufacturer IO figures usually state the queue length and block size.
- For testing, use VSAN proactive tests, HCI bench, you can use iometer but have to understand how SSDs work and set the configuration accordingly
- Performance stats now available in the vCenter Web Client (since 6.0U2), going back 90 days.
- Have to make sure RAID controller, firmware versions and driver versions match what is on the HCL
- Can use Ready Nodes to ensure they’re correct out of the box. If the default spec doesn’t match your requirements, you can increase it (more memory, storage); the default spec is just a minimum.
- Can mix multiple vendors in a single cluster – try and keep the specifications the same (CPU/Mem/Storage) to avoid wastage.
- Ideally have 2 Disk Groups per host (this means minimum of 2 cache devices)
- Use multiple capacity devices per Disk Group
- VMware are working on having VSAN managing firmware and driver revisions, to help you with matching the HCL
- Ensure MTU > 1500
- Use a different multicast address per cluster
- 10GigE is a *must* for all flash
- Use Network IO Control if you have shared interfaces (usually the case if you’re using 10GigE)
VSAN 6.5 – what’s new?
- iSCSI access
- Provide block storage to servers not in the VSAN cluster
- Can use for eg Oracle RAC, Physical workloads
- Max LUN size is 62TB
- Still enables Dedupe and Compression, RAID0/1/5
- 2 node direct connect – connect 2 nodes with crossover cables and have a remote witness – this enables a low cost ROBO entry point for VSAN
- Supports NVMe, 512e storage, 100Gbps networking
- New PowerCLI support
- Health check and remediation
- Lots of new cmdlets
- VSAN now ready for
- VMware Integrated Containers
- Docker Volume Driver
- All Flash is now supported in all license versions, higher license versions add things like dedupe/compression
A useful website is https://StorageHub.vmware.com
Cloud Foundation – Lee Dilworth
Lee provided an overview of the new VMware Cloud Foundation offering. From a personal viewpoint, a new ‘unified SDDC platform’ seems to be offered each year, but maybe that’s just my perception….
- High demand for technology that simplifies infrastructure, but hard to integrate the different technologies.
- SDDC Manager – provides Automated Lifecycle Management, of Compute, Network and Storage
- This is an ‘Integrated Platform’, vSphere + NSX + VSAN
- Provides Cross Cloud Architecture, Private and Public Cloud (AWS)
- Can be used on a limited range of VSAN ready nodes (3 vendors at present, including Dell), or VxRACK
- Based on full stack vSphere (vSphere + NSX + VSAN) with SDDC manager on top, plus a range of optional components such as LogInsight, VRO, and VRA via external integration.
- SDDC Manager
- Single point management (manages Hardware and Software)
- One management domain
- One to multiple workload domains
- Provides full lifecycle management
- Integrates into the Web Client
- Hardware management
- Uses OOB management agents in Top Of Rack switches
- Provides Discovery, Bootstrap, Monitoring
- Uses both In Band and Out of Band connections
- Management Domain
- One management domain per cloud instance
- Uses 3 nodes minimum but 4 recommended
- Dedicated VCenter plus redundant PSCs
- Both vDS and NSX vSwitch
- Workload Domains
- Either VDI or standard Virtual Infrastructure
- Carved out by the SDDC manager
- Dedicated VC in management domain
- Shared SSO with management PSCs
- NSX – dedicated NSX Manager in management domain, controllers in workload domain.
- Can automatically deploy and patch vSphere, NSX, VSAN
- Can deploy but not currently patch LogInsight etc
- You can upgrade workload domains independently
- Minimum of 8 nodes (4 mgmt, 4 workload)
- Maximum of 8 racks
- VSAN All Flash *or* Hybrid, and can even use network attached storage
Training and Certification Update – Ed Wills (I think!)
There was a short session to give an update on the latest training courses.
- What’s new 5.5->6.5 – 3 days
- vSphere ICM 6.5 – 5 days
- vSphere Optimize and Scale 6.5 – 5 days
- Cloud Automation Design and Deploy 7.1 – 5 days
- vCD ICM 8.1 – 5 days
- Cloud Orchestration and Extensibility – 5 days
- Horizon 7 ICM & App Volumes – 5 days
- NSX ICM & Troubleshooting and Operations 6.2 – 5 days
- vSphere ICM & VSAN 6.5 – 5 days
Enterprise Learning Subscription
This was something I’d not heard of before, but you can register people for 75 training credits per person per year and get access to:
- All on-demand courses
- Learning Zone
- Exam prep materials
- VCP exam voucher
There is a minimum of 5 people per company.
Training Needs Analysis
This is a new offering, where VMware will perform an analysis of what training your staff require.
It considers business needs, current staff competencies, training methods, cost, effectiveness, and produces a benchmark of the current state, what training is required and why, priorities, where the training will be delivered, who should receive it, how the training will be delivered and how much it will cost.
vRealize Automation – Kim Ranyard
Kim gave an overview of the history of vRA
- It was originally DynamicOps Cloud Automation Center
- Then bought by VMware
- Released as vCAC 5.1 -> 5.2 -> 6.0 -> 6.1
- Then vRA 6.2 -> 7.0 -> 7.0.1 -> 7.1 -> 7.2
- Designed to accelerate time-to-value
- Simplified Virtual Appliances HA Landscape
(instead of needing large numbers of VMs to get it up and working, condensed to 1, or 2 for HA)
- Enhanced Authentication capabilities
- Per-tenant branding of the portal
- Unified Service Design
- Converged Application Authoring
- Out-of-the-Box blueprints for more apps, such as MS SQL Server, LAMP stack
- Able to dynamically configure NSX components
- Blueprints as code – you can export/import blueprints as YAML
- Event Broker – provides centralised policy management, helps to integrate with vRO workflows
- Now includes a silent install option
- Can migrate from 6.2 to 7.1
- Fixes a number of 6.x upgrade blockers
- Includes a number of provisioning enhancements, eg provision eager-zero disks, change number of vCPU on a VM
- Data collection improvements
- Picks up vSphere Infrastructure changes better, in case someone makes a change outside of vRA
- Has Out-of-the-Box IPAM integrations
- Includes more Ready-to-Import blueprints
- Can now scale out/in a service (blueprints only), eg add additional app servers to a service to cope with increased load, scale back as load decreases
- AD integration – can create/delete AD objects OOtB
- New ‘reconfigure states’ to enable triggering other workflows
- Enhanced update API
- Migration improvements
- LDAP support
- Scale in/out for XaaS components
- Enhanced LoadBalancing capability
- IPAM framework extended
- Re-assign managed VMs
- Azure endpoint support
- Container management (container host, and containers)
- ServiceNow integration
vSphere 6.5 – David Burgess
The most interesting section to me as I’ve not really had chance to look at it yet, was this section on what’s new with vSphere 6.5.
- The VCSA is now the preferred version of vCenter, and new features will be added to it, not to the Windows version.
- VCSA exclusive features today:
- Native HA capability
- Integrated VMware Update Manager
- Improved Appliance Management
- Native Backup/Restore
- Uses PhotonOS rather than SuSE.
- VCSA Deployment
- The installer has support for Windows, Mac and Linux
- Deploys the OVF, then configures as a second step
- Options to Install/Upgrade/Migrate/Restore
- Can migrate from Windows, 5.5 or 6.0 to 6.5
- VCSA has an HTML5 management interface for the appliance itself
- VCSA HA – Active/Passive with a witness VM (3 VMs in total)
- HTML5 Web Client
- Now fully supported by VMware
- ~90% feature parity with the flash web client
- Performance is much better – less resource intensive (applies to Windows vCenter too)
- Host profiles are much improved
- Auto-Deploy – there is now a graphical image builder (rather than just the PowerCLI cmdlets), and it supports IPv6 and UEFI
vSphere API & CLI
- New REST API for VM management
- Choice of SDKs and automation tools – multiple languages, plus PowerCLI and DCLI
- Enhanced Logging
- VM Encryption – both disk and vmotion traffic
- Uses an external Key Management Server
- Can have a non-crypto admin user that can do most admin but not access console, read/write data etc
- Encrypted vMotion – can be set to Disabled/Opportunistic/Required
- UEFI Secure boot (for the hypervisor) – needs signed drivers
- VM Secure boot (UEFI secure boot for the VM)
Application Availability and Resource Management
- Proactive HA – detect hardware degraded conditions, vMotion guests off host. Hardware OEM participation is required, eg Dell OpenManage, HP Insight Manager
- HA Orchestrated Restart – VM-to-VM dependency checks (this has validation checks to prevent dependency loops for example)
- 5 Restart Priorities (up from 3 in previous versions)
- HA Admission Control – this has been updated to simplify
- Choose Failures To Tolerate
- Based on % of resources reserved
- Automatic calculations, rather than manual reconfiguration whenever you add/remove a host
- Overrides are possible
- New DRS options
- Even distribution (helps to balance out the cluster even if it’s not required for performance reasons)
- Can base on consumed memory rather than active memory
- Takes into account CPU overcommitment
- New CPU models and architectures are now supported
- LUN limit has been increased to 512
- Supports vRDMA (virtualised Remote Direct Memory Access) via a paravirtual driver.
The day then concluded with a demo of VRA with Codestream.
I felt it was a worthwhile event, and it was great to meet a few new people. Thanks again to the VMware UK TAM team for running it.