On top of my desk there are two PC systems right now: a Windows 8.1 box which is capable of running “apps” in full screen mode 🙂 and the Lintop Ubuntu box. I searched for a solution that lets me control both systems with just one keyboard and mouse. I found the Synergy project. It is a really cool solution and worth the few dollars.
Watch this video to understand how it works.
Some points why this is great for me:
- If you are playing a shooter in full screen mode (Windows), the mouse will not jump to the other box.
- But if you press “Esc” so that a mouse pointer is shown, you can jump to the other screen.
- The game will not be minimized when you set “Don’t take foreground window on Windows servers”.
- You can define smaller areas where the pointer jumps to another box and where it does not.
- Multiple screen support
- Copy&Paste clipboard from one box to the other
- MacOS, Linux and Windows support
A screenshot of my simple configuration:
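For reference, a setup like mine can also be expressed as a plain-text `synergy.conf`. This is only a sketch: the screen names `winbox` and `lintop` are my hostnames, yours will differ, and the GUI configuration writes an equivalent file for you.

```
section: screens
    winbox:
    lintop:
end

section: links
    # Moving the pointer off the right edge of winbox enters lintop,
    # and off the left edge of lintop returns to winbox.
    winbox:
        right = lintop
    lintop:
        left = winbox
end
```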
Over the last few days I thought about which OS I would like to install on Lintop. My main desktop is Windows 8.1, so I decided to use the Ubuntu 14.04.2 LTS desktop version. For sure there are a lot of distributions which could do the job; my decision is based on the following requirements:
I read about the Gnome vs. Unity fight, and there have been some comparisons between them, like the one in LinuxUser. But I went with Ubuntu/Unity just because I used it two years ago and liked it.
Go docker trusted registry!
I believe that performance tuning/analysis/testing is one of the most complex tasks in the IT world. I have read articles from well-known IT people, famous for well-founded statements, who were nevertheless proven wrong when they did performance tuning/analysis/testing.
I am planning a series of performance tuning/analysis/testing posts about storage performance with Linux file systems, Docker, MySQL, MS SQL Server and more. To avoid the common mistakes I searched the Internet for some scientific approaches.
Andrew Pruski wrote a nice article and Raj Jain wrote a book about this topic. This presentation also shows the common mistakes. I decided to use these approaches and hopefully provide well-founded posts.
I added some points to Andrew’s approach. I will call it 8PP (8 Phases of Performance tuning/analysis/testing) from now on, because I will reference this approach often.
AND don’t forget: “Performance tuning/analysis/testing is a continuous process“. What you consider to be optimal for your workload today may not be optimal tomorrow.
I will link a real example showing the 8PP soon.
8PP – The 8 Phases of Performance tuning/analysis/testing (Draft 1.3)
Phase 1 – Observation
- 1.1 Understand the problem/issue
- Talk to all responsible people if possible
- Is the problem/issue based on a real workload?
- Is the evaluation technique appropriate?
- 1.2 Define your universe
- If possible isolate the system as much as you can
- Make sure to write down exactly how your system/environment is built
- Firmware, OS, driver, application versions, etc…
- 1.3 Define and run basic baseline tests (CPU, MEM, NET, STORAGE)
- Define the basic tests and run them while the application is stopped
- Document the basic baseline tests
- Compare to older basic baseline tests if any are available
- 1.4 Describe the problem/issue in detail
- Document the symptoms of the problem/issue
- Document the system behavior (CPU, MEM, NET, STORAGE) while the problem/issue arises
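For the “define your universe” step I find a small script more reliable than hand-written notes. A minimal sketch, assuming a Linux box (tool availability like `lscpu`/`lsblk` varies per distribution; extend it with `dmidecode` for firmware, which needs root, and with driver and application versions):

```shell
#!/bin/sh
# Dump the basics of the test "universe" into a dated file so every
# result can later be matched to the exact system it was measured on.
doc="sysdoc-$(date +%F).txt"
{
  echo "== Kernel ==";  uname -a
  echo "== CPU ==";     lscpu 2>/dev/null | grep -E 'Model name|^CPU\(s\)'
  echo "== Memory ==";  grep MemTotal /proc/meminfo
  echo "== Disks ==";   lsblk -d -o NAME,MODEL,SIZE 2>/dev/null
} > "$doc"
echo "wrote $doc"
```

Keep these files next to the baseline test results; comparing against an older snapshot makes phase 1.3 much easier.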
Phase 2 – Declaration of the end goal or issue
- Officially declare the goal or issue
- Agree with all participants on this goal or issue
Phase 3 – Forming a hypothesis
- Based on observation and declaration form a hypothesis
Phase 4 – Define an appropriate method to test the hypothesis
- 4.1 Don’t define overly complex methods
- 4.2 choose … for testing the hypothesis
- the right workload
- the right metrics
- some metrics as key metrics
- the right level of details
- an efficient approach in terms of time and results
- a tool you fully understand
- 4.3 document the defined method and setup a test plan
Phase 5 – Testing the hypothesis
- 5.1 Run the test plan
- avoid running other workloads during the test; if you can’t, don’t test
- run the test at least two times
- 5.2 save the results
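As a trivial illustration of “run the test at least two times” and “save the results”, here is a sketch that uses `dd` purely as a stand-in for a real benchmark tool like fio (assumes GNU dd and a scratch path under /tmp):

```shell
#!/bin/sh
# Run the same sequential-write test twice and append each result line
# to a results file; dd is only a placeholder for the real test plan.
: > 8pp-results.txt
for run in 1 2; do
  dd if=/dev/zero of=/tmp/8pp-test.img bs=1M count=64 conv=fdatasync 2>&1 \
    | tail -1 >> 8pp-results.txt
done
rm -f /tmp/8pp-test.img
cat 8pp-results.txt
```

If the two lines differ wildly, something else was running or the method is flawed, and that is exactly what phase 6 is meant to catch.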
Phase 6 – Analysis of results
- 6.1 Read and interpret all metrics
- understand all metrics
- compare metrics to basic/advanced baseline metrics
- is the result statistically correct?
- has a sensitivity analysis been done?
- concentrate on key metrics
- 6.2 Visualize your data
- 6.3 “Strange” results mean you need to go back to “Phase 4.2 or 1.1”
- 6.4 Present understandable graphics for your audience
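To make the “is the result statistically correct?” check concrete, here is a tiny sketch with hypothetical IOPS numbers from three repeated runs (a real analysis would use more runs and proper confidence intervals; the numbers are made up for illustration):

```shell
# Hypothetical IOPS results from three identical runs of the same test.
runs="4210 4187 4305"

# Mean, standard deviation and coefficient of variation: a high CV means
# the runs scatter too much to draw conclusions from a single number.
echo "$runs" | awk '{
  for (i = 1; i <= NF; i++) { s += $i; ss += $i * $i }
  m  = s / NF
  sd = sqrt(ss / NF - m * m)
  printf "mean=%.0f stddev=%.0f cv=%.1f%%\n", m, sd, 100 * sd / m
}'
# → mean=4234 stddev=51 cv=1.2%
```

A CV around 1% like this suggests the runs are repeatable; a CV of 20% would send me back to Phase 4.2.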
Phase 7 – Conclusion
Is the goal or issue well defined? If not, go back to “Phase 1.1”.
- 7.1 Form a conclusion on whether and how the hypothesis achieved the goal or solved the issue!
- 7.2 Next Step
- Is the hypothesis true?
- if the goal/issue is not achieved/solved, form a new hypothesis.
- Is the hypothesis false?
- Is there a dependency to something else?
- If the goal is achieved or issue solved
- Document everything! (You will need it in the future)
Phase 8 – Further research
- 8.1 If needed form a new goal/issue
- 8.2 Define and run advanced baseline tests for future analysis
- 8.3 If possible implement a continuous approach to monitor the key metrics
The 8PP itself will change from time to time because performance tuning/analysis/testing will evolve.
Go docker Kitematic!
Today I received my new Linux desktop based on the ASRock Q1900M. I ordered this hardware because I need a Linux environment which can run all day without drawing hundreds of watts.
So a new world is born: Lintop (Linux desktop)
Because I will run some minor performance tests with this machine, it is important to document the exact hardware and firmware.
This Silvermont (Bay Trail-D) based system with the J1900 provides:
- SOC (System on Chip) architecture
- J1900 @ 2 GHz
- DRAM – DDR3L 1333
- Gen 7 Intel HD Graphics
- 6-port SATA AHCI Controller
- High Definition Audio Controller
| Component | Description | Firmware | Driver | Settings/Notes |
|---|---|---|---|---|
| MS-Tech LC-01 | µATX slimline desktop chassis | - | - | Chassis fan regulated by CPU temperature |
| ASRock Q1900M | µATX mainboard with Intel J1900 CPU | 1.50 | - | Intel SpeedStep: ON, CPU C-states: C7, XD bit: ON, Intel VT: ON |
| 16GB Kit Crucial Ballistix Sport | DDR3-1600 DRAM (2x 8GB) | - | - | |
| SanDisk X110 256GB | 256 GB SATA SSD (6 Gb/s) | X231600 | - | running at 3 Gb/s |
| ioScale 410 GB | Fusion-io (SanDisk) PCIe flash drive | 7.1.17 | 3.2.11 | running at PCIe v2 x1 lane (mainboard limit) |
| Realtek 1 Gbit/s (onboard) | 1 Gbit/s network adapter RTL8111/8168/8411 | rtl8168g-2_0.0.1 | rtl8168g-2_0.0.1 | |
| Samsung SH224DB | CD/DVD drive | - | - | |
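Board and BIOS details like the ones listed above can be read straight from sysfs on Linux, without rebooting into the firmware setup. A minimal sketch (the DMI paths exist on most x86 boards; on other machines the values may simply be missing):

```shell
#!/bin/sh
# Read board and BIOS identification from the DMI entries the kernel
# exports; fall back to "n/a" where the platform does not provide them.
for f in board_vendor board_name bios_version bios_date; do
  printf '%s: ' "$f"
  cat "/sys/class/dmi/id/$f" 2>/dev/null || echo "n/a"
done
```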
After the legal notice (“Impressum“), which is more or less mandatory in Germany, you should think about a backup of your WordPress installation. If you have ever lost a self-written document, you know how frustrating it can be to write the same text again.
1. Prepare QNAP for backup via FTP
I am using selfhost.de as a DynDNS provider, which is needed because my home office Internet IP address may change from time to time. My home office router, a Netgear WNDR4000, is not able to update the selfhost DynDNS IP, so I am using the QNAP TS-421 instead. You can choose any DynDNS provider which your router or QNAP supports.
Environment which is in place already:
- WordPress on AWS EC2 instance with Elastic IP
- AWS Route 53 with tfindelkind.com registered and pointing to Elastic IP (WordPress)
- Netgear Router with dynamic Internet IP address
- selfhost.de DynDNS account with a registered DynDNS name like tfindelkind.selfhost.eu
- QNAP up and running and a backup folder created like “tfindelkind-com”
Setup Netgear WNDR-4000 FTP port forwarding
- Login to the Netgear router
- switch to “ADVANCED”
- click on “Advanced Setup”
- Click on “Port Forwarding/Port Triggering”
- Choose “FTP” and enter the IP of your QNAP and click “Add”
Setup QNAP TS-421 for DDNS with selfhost.de
- Login to QNAP
- Click on “Control Panel”
- Click on “System Settings”
- Click on “Network”
- switch to DDNS Service
- Click on “Enable Dynamic DNS Service”
- Select “selfhost.de” as DDNS server
- Enter Username (It is NOT the Account Name for selfhost.de, see DynDNS setup)
- Enter password
- Enter Host Name: tfindelkind.selfhost.eu
- Click on “Apply” and the DDNS should be updated successfully
Setup QNAP FTP Server
- Login to QNAP
- Click on “Control Panel”
- Click on “Network Services”
- Click on “FTP”
- Click on “Enable FTP service”
- Set “Enable anonymous” to “No”
Setup FTP User
- Login to QNAP
- Click on “Control Panel”
- Click on “Privilege Settings”
- Click on “Users”
- Click on “Create”
- Enter Username “stupidftp”
- Enter password 2x
- Click on “Edit” at “Shared Folder Permission”
- Allow only RW access to the backup folder “tfindelkind-com”
- Click “Close”
- Click on “Edit” at “Edit Application Privilege”
- Allow only access to FTP Service
- Click “Close”
2. WordPress Backup with BackWPup
I use BackWPup because it’s free and it provides the features that matter for my use case: automatic/scheduled backups, backup over FTP (QNAP), and the ability to back up everything needed to restore the whole WordPress installation.
BackWPup also supports other backup destinations like Amazon S3 or Dropbox. These can be cheap alternatives if you don’t have a QNAP or FTP filer at home.
- WordPress 4.2.3 running on AWS EC2 instance (free tier)
- Plugins: AntiSpam Bee 2.6.7 , BackWPup 3.1.4 ,Twitter 1.1.0, Yoast SEO 2.3.2
Install the BackWPup Plugin
- If not already installed.
- Go to Plugins -> Add new -> enter “BackWPup” in the search field
- Click “Install now”
Create a backup job for BackWPup
After the plugin is installed there should be a new menu item on the left side. If you click it you should see the dashboard, or you can just start to set up a backup job. In the next steps I create a backup job which backs up the full WordPress installation once a day at 22:00.
- Click on “Add new job”
- Give your job a meaningful name “daily_full_backup”
- Check all Jobs tasks for a full backup of your site: “Database backup, File backup, WordPress XML export, Installed plugins list”
- Configure the destination: Choose “Backup to FTP”
- Switch to the “Schedule” tab
- Choose “with WordPress cron”
- Choose “basic”
- Choose “daily 22:00”
- Switch to the “To: FTP” tab
- Enter FTP server: “tfindelkind.selfhost.eu”
- Enter Username: “stupidftp”
- Enter password
- Folder to store files in “tfindelkind-com”
- Save changes
- You should run the job right away to make sure everything is working correctly
You just created a full backup of your WordPress installation.
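One caveat with the “with WordPress cron” schedule: WP-Cron only fires when someone visits the site, so on a very quiet day the 22:00 job can start late. If that bothers you, a common workaround is a real cron job that requests `wp-cron.php` at the scheduled time. A sketch of the crontab line, using this blog’s URL (adapt it to yours):

```
# Runs on the EC2 instance (or any box that can reach the site):
# trigger WordPress cron at 22:00 so due jobs fire even without visitors.
0 22 * * * curl -s https://tfindelkind.com/wp-cron.php?doing_wp_cron >/dev/null 2>&1
```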
Go docker hub!
I already mentioned that this blog is based on the AWS Free Tier. In the upcoming posts I would like to visualize the environment I am working with. MS Visio is not an option because I don’t want to buy it. I picked OpenOffice Draw instead because it’s free and it seems to be possible to import the AWS Simple Icons (SVG) format.
1. Install Apache OpenOffice Draw
- I am using Windows 8.1 (Ubuntu will be installed tomorrow 🙂), so downloading and running the executable did the job
2. Download the AWS Simple Icons
- Download the AWS Simple Icons (SVG/EPS) and extract the zip files into a folder you like
- I used “My Documents\AWS_Simple_Icons_svg_eps”
3. Import AWS Simple Icons into the Gallery/Theme
- Open Draw
- On the right side switch to “Gallery” (I use the German version, so I am guessing the English names)
- Click on “new topic”
- Type “AWS”
- switch to “Files”
- choose “SVG – Scalable Vector Graphics” as file type
- click on “File search” and choose the folder you used in step 2
- click on “add all”
Ready for use.
Go docker compose!
Since 1997 German law has required certain websites to provide an “Impressum“. This was done to protect internet users and to help them find out who runs a site, whether a company or private individuals.
So in Germany you should always provide all required details. I am not a lawyer, but I did my best to include everything that is needed. Keep in mind that this is a personal blog, not a commercial site. Still, when in doubt, assume that your personal data may be logged, saved or used in any possible way. My advice: browse as anonymously as technically possible. Disable cookies, use a sandboxed browser, proxies or whatever helps you avoid leaving a data footprint.
Go docker swarm.