Category Archives: Admin

CI/CD Environment for A Smaller Project

The advantages of Continuous Integration (CI) and Continuous Delivery (CD) are obvious even for small projects with a few contributors, and they are easily achievable with the help of free cloud tools – for instance with the mighty combo of GitHub plus Travis. But what if we want a similarly convenient environment inside our private network, available only to our internal teams? Luckily, open source is here again to help us with another great tool – GitLab. GitLab is a platform similar to GitHub, but its code is open source and we can easily install it in our own environment. In this article I’ll summarize my experiences and guidelines on how to build a convenient environment for a small project with automatic testing and deployment. Continue reading CI/CD Environment for A Smaller Project
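
As a quick taste, a self-hosted GitLab CE instance can be brought up with a single Docker command – a minimal sketch only, where the hostname and volume paths are placeholders rather than the exact setup from the full post:

    # Run GitLab CE in a container; data, logs and config persist on the host
    sudo docker run --detach \
      --hostname gitlab.example.com \
      --publish 443:443 --publish 80:80 --publish 2222:22 \
      --name gitlab \
      --restart always \
      --volume /srv/gitlab/config:/etc/gitlab \
      --volume /srv/gitlab/logs:/var/log/gitlab \
      --volume /srv/gitlab/data:/var/opt/gitlab \
      gitlab/gitlab-ce:latest

A GitLab Runner registered against this instance then takes care of the automatic testing and deployment discussed in the article.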

What Is This Weird File Name in My Samba Share?

In IT there are big things and there are small things. Some small things can be pretty annoying, and they seem to stay around forever. One of these annoying little things is the difference between file name restrictions in Windows versus Unix/Linux (others are, for instance, legacy character encodings and HTTP proxy support – these things have teased me many times in the past). Have you ever seen a strange file name like W3NEM5~I on a shared disk instead of the meaningful name you expected? If so, and you’re interested in what’s going on, continue reading. Continue reading What Is This Weird File Name in My Samba Share?
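
Names of this shape come from Samba’s name mangling, which substitutes short 8.3-style names for file names that Windows clients cannot handle. A hedged way to take a first look on the server (the share path is just an example):

    # Show the mangling-related settings Samba is actually using
    # (testparm ships with Samba; -s skips the prompt, -v includes defaults)
    testparm -s -v 2>/dev/null | grep -i mangl

    # Find files whose names contain characters Windows cannot use,
    # a common reason Samba presents a mangled name instead
    find /srv/share -name '*[<>:"|?*]*'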

Linux Desktop for 2017 and on

As Canonical has announced the end of the Unity desktop, I thought it was time to look around at Linux desktops again. In past years I have mainly used Gnome 2 (or Mate recently), XFCE, Cinnamon and Unity (yes I did, and the experience was after all rather positive). I tried Gnome 3 a few years ago, but never really gave it a longer try, and I never found KDE attractive. So in this article I’ll look a bit at those desktops again, and especially at the recent Gnome Shell and its customization to my needs (which is indeed based on very individual preferences). Continue reading Linux Desktop for 2017 and on
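
As a small taste of the kind of tweaking discussed in the post, a couple of gsettings one-liners – the particular values are merely examples of personal preference, not the exact customizations from the article:

    # Bring back minimize/maximize buttons on window title bars
    gsettings set org.gnome.desktop.wm.preferences button-layout ':minimize,maximize,close'

    # Prefer the dark GTK theme
    gsettings set org.gnome.desktop.interface gtk-theme 'Adwaita-dark'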

Beware of sync option in mount

By default mount uses the async option, which means that write operations do not wait for final confirmation from the device – they are stored in the disk cache and the writes are done later, optimized by the disk firmware. However, you can set the sync option manually (-o sync); then write operations are synchronous, meaning each block write has to wait for confirmation that it has been physically written to the disk, and no such optimization is possible. This can significantly slow down write speed, as I found out just recently when I backed up some data to an external 2.5″ USB 3.0 HD – the slowdown in this case was almost 1000x (70 kB/s vs 60 MB/s, measured by rsync --progress). How did it happen that the disk was mounted with the sync option? I use usbmount to auto-mount disks, and it has sync as a default mount option (fortunately this can be changed in its configuration). So the conclusion is – don’t use the sync option unless you know exactly what you are doing, and if write speed is suspiciously slow, check the mount options.
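
To see whether you are affected, checking the active mount options is enough; a small sketch, where the mount point and config path assume a Debian-style usbmount install:

    # Show the options the drive is actually mounted with ("sync" is the red flag)
    findmnt -o TARGET,OPTIONS /media/usb0

    # usbmount's defaults live in /etc/usbmount/usbmount.conf; removing "sync"
    # from MOUNTOPTIONS restores the normal asynchronous behaviour, e.g.:
    #   MOUNTOPTIONS="noexec,nodev,noatime,nodiratime"

    # Quick remedy for an already mounted drive
    sudo mount -o remount,async /media/usb0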

The Splendors and Miseries of CaaS – Experiences with Openshift3

Container as a Service (CaaS) is an increasingly popular cloud service (usually categorized under the Platform as a Service family of cloud services). It provides an easy way to deploy web applications leveraging Linux container technologies, most commonly Docker containers. A recent addition to this family is Openshift v3 from Red Hat. Openshift is available as open source software (Openshift Origin) or as a hosted service (OpenShift Online). I already used the previous version of the Openshift service (v2), as described in my previous article. In this article I’ll share my recent experiences with the Openshift v3 service (also called NextGen). Continue reading The Splendors and Miseries of CaaS – Experiences with Openshift3
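
For context, the basic v3 workflow revolves around the oc command line client; a rough sketch, where the cluster URL, token, project and application names are placeholders:

    # Log in to the hosted cluster and create a project
    oc login https://api.example.openshift.com --token=<your token>
    oc new-project demo

    # Build and deploy straight from a git repository using a source-to-image builder
    oc new-app python~https://github.com/example/myapp.git --name=myapp

    # Expose the service to the outside world via a route
    oc expose svc/myapp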

Ethereum local playground

In a past article I talked generally about blockchain technologies; in this article we will look into Ethereum from the user’s perspective. We will build a local playground where we can test many functions of Ethereum (Ether transfers, using and writing smart contracts, and more) without spending real Ether (and thus real money). This guide is intended for users with a Linux OS. Continue reading Ethereum local playground
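
One possible way to get such a playground, assuming the geth client is installed (the flags, paths and network id below are illustrative, not necessarily the exact setup from the article):

    # Initialize a private chain from a custom genesis block and run an isolated node
    geth --datadir ~/eth-playground init genesis.json
    geth --datadir ~/eth-playground --networkid 1999 --nodiscover console

    # Or simply use geth's built-in throwaway development chain,
    # which starts with a pre-funded account
    geth --dev console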

NetworkManager Script to Set HTTP Proxy

While Gnome and its derivatives support automatic proxy detection, it does not work well for all programs, particularly command line programs. I’ve found that a simple script in /etc/NetworkManager/dispatcher.d, which sets and unsets a fixed proxy, works better for me. NM dispatcher scripts are run each time network connections change (network up, down, VPN connect etc.); they receive two parameters (interface name and status) and a bunch of environment variables. Continue reading NetworkManager Script to Set HTTP Proxy
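
A minimal sketch of such a dispatcher script – the proxy address and the file it writes are assumptions, and the real script from the article may differ:

    #!/bin/sh
    # /etc/NetworkManager/dispatcher.d/90-proxy  (must be executable and owned by root)
    IFACE="$1"    # interface name, e.g. eth0
    STATUS="$2"   # up, down, vpn-up, vpn-down, ...

    PROXY="http://proxy.example.com:3128"

    case "$STATUS" in
        up|vpn-up)
            # write proxy variables for login shells
            printf 'export http_proxy=%s\nexport https_proxy=%s\n' "$PROXY" "$PROXY" \
                > /etc/profile.d/proxy.sh
            ;;
        down|vpn-down)
            rm -f /etc/profile.d/proxy.sh
            ;;
    esac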

Openshift – Second Thoughts

Openshift Online still remains one of the most generous PaaS offerings on the market. With 3 free containers it’s a really good bargain. Recently I modified a couple of my older applications (myplaces and iching) to run in Openshift.

Previously I had created a pretty standard and simple Flask application and deployed it on Openshift. The process was pretty straightforward, as described in this article. However, this time the situation was different, because both applications are special. Continue reading Openshift – Second Thoughts

Do We Trust Cloud Storage For Privacy?

With more generous offerings from cloud storage providers – up to 50GB for free – cloud storage is a tempting alternative for storing some of our data. I have some data which I really do not want to lose. I already have it stored on several devices, but an additional copy in the cloud could help. But how much can I trust cloud providers to keep my data private, even from their own employees? Not that I have anything super secret, but somehow I do not like the idea that some bored sysadmin will be browsing my family photos. Or that the provider will use my photos for some machine learning algorithms.

Major providers like Dropbox and Google do use some encryption, however they control the encryption keys, so they can theoretically access your data at any time and, in the worst case, provide it to third parties – like government agencies. From what I have seen, only a few providers like Mega or SpiderOak offer privacy by design – which means all encryption is done on the client and they should not have any access to your keys (zero knowledge). However, how much can we trust that their implementation is flawless, or that no intentional back doors were left? There were some concerns about Mega’s security a couple of years ago, but no major issues have appeared since then.

So rather than trusting those guys fully, why not take an additional step and also encrypt our data before sending it to the cloud? Additional encryption will not cost us much CPU time on current hardware (from my tests – 11% of one core of an old AMD CPU) and will not slow down transfers, because those are limited by Internet connection bandwidth anyway. And on Linux we have quite a few quality encryption tools like gpg or openssl, which can be relatively easily integrated into our backup/restore chains. In the rest of this article I’ll describe my PoC shell script, which backs up / restores a whole directory to MEGA while providing additional encryption / decryption on the client side. Continue reading Do We Trust Cloud Storage For Privacy?
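
The idea in a nutshell – nothing leaves the machine unencrypted – can be sketched like this; the paths are placeholders and the use of megatools for the upload is an assumption, not necessarily what the PoC script in the full post does:

    # Backup: pack the directory, encrypt it symmetrically with gpg, upload to MEGA
    tar -cz ~/photos \
        | gpg --symmetric --cipher-algo AES256 -o photos.tar.gz.gpg
    megaput --path /Root/backups photos.tar.gz.gpg

    # Restore: download, decrypt and unpack
    megaget /Root/backups/photos.tar.gz.gpg
    gpg --decrypt photos.tar.gz.gpg | tar -xz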