Archives For Programming

powershell on macos

July 4, 2017

I wrote in an earlier post about easily installing multiple applications on macOS. One of those applications, interestingly enough, is Microsoft’s PowerShell. Microsoft has open sourced PowerShell and made it available on GitHub (https://github.com/PowerShell/PowerShell). From GitHub you can either grab the source and build it yourself, download and install a pre-built binary, or both. I chose to just grab the binary and run with it, as I have enough projects to keep me occupied as it is. The current PSVersion is 6.0.0-beta, running here on macOS 10.12.5.
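
For reference, here is roughly what grabbing and installing a pre-built macOS package looks like from a terminal. This is only a sketch: the release tag and file name below are illustrative, so check the project’s Releases page on GitHub for the current ones.

# Download a pre-built macOS package (version and file name are examples only;
# substitute the current release from the GitHub Releases page).
curl -L -O https://github.com/PowerShell/PowerShell/releases/download/v6.0.0-beta.3/powershell-6.0.0-beta.3-osx.10.12-x64.pkg
sudo installer -pkg powershell-6.0.0-beta.3-osx.10.12-x64.pkg -target /

# The beta builds install the shell as 'powershell'; later builds rename it 'pwsh'.
powershell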

I start PowerShell for Mac from the iTerm2 command line, and from there I can explore PowerShell all I want. While the shell appears feature complete as far as syntax is concerned, PowerShell for Mac is missing considerable core .NET functionality compared to the versions that run on Windows. As one small example, shown below, Get-PSProvider doesn’t list a Registry provider, because there is no equivalent (at all) on macOS. While it’s nice to have the same shell running across multiple platforms, the way bash does, PowerShell on macOS and Linux isn’t going to be nearly as useful as it is on Windows. Fair warning: any PowerShell scripts written to take heavy advantage of Windows OS functionality are going to fail pretty hard on both macOS and Linux.
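
If you want to check this on your own install, the following will do it. The exact provider list can vary by build, but a Registry provider won’t be among them on macOS.

# List the providers available in this session; there is no Registry provider on macOS.
Get-PSProvider

# Asking for the Registry provider by name fails outright on macOS.
Get-PSProvider -PSProvider Registry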

By the way, two comments on PowerShell help:

  1. If you decide to update PowerShell’s help, start PowerShell under sudo before running Update-Help, or the updates will fail (see the example after this list).
  2. The graphical view of help (via -ShowWindow) isn’t implemented and won’t work.
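
A minimal sketch of the help update from an iTerm2 session (the binary name is 'powershell' in the 6.0.0-beta builds; later builds rename it to 'pwsh'):

# Start an elevated PowerShell session first, otherwise Update-Help cannot
# write to the system help location.
sudo powershell

# Then, inside the elevated session:
Update-Help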

One tool not provided by the PowerShell project is the PowerShell ISE (Integrated Scripting Environment). The ISE is bundled with every copy of Windows these days, and is a powerful way to write and debug PowerShell scripts of any complexity. On the Mac the next best tool to use is Visual Studio Code with the PowerShell v1.4.1 extension. You get full syntax highlighting and editing support, as well as a split screen with code at the top and a PowerShell prompt at the bottom. The only major feature missing in this setup is the help pane that is displayed to the right (by default) in the native ISE.

PowerShell for Mac and Visual Studio Code for Mac are an interesting counterpoint to Windows Subsystem for Linux on Windows 10. For those folk who like to “swim” in the lower regions of coding and operating systems, we’re living in a golden era.

There has been talk for some time about how Apple devices running iOS are contenders for replacing standard Intel-architecture computers such as MacBook Pros. Since I have a number of Apple devices, I thought I’d install Geekbench 4 (version 4.1) and run it across three of them. I’ve put the results in a simple table below; the benchmark scores are in the first three rows.

                    MBP mid-2015     iPhone 7 Plus      iPad Pro 2016
CPU Single-Core     4462             3457               3017
CPU Multi-Core      16005            5872               5082
Compute             38117            12296              14764
Processor           Intel Core i7    Apple A10 Fusion   Apple A9X
Max Frequency       2.8 GHz          2.34 GHz           2.26 GHz
OS                  macOS 10.12.5    iOS 10.3.2         iOS 10.3.2

The MBP I own is a 15″ Retina MBP with 16GB of memory and the 2.8GHz quad-core i7. I wasn’t surprised to see the MBP be the leader across the board, particularly in multi-core scoring. The MBP is certainly the brawniest of the three with its Intel processor and eight times the memory over both the iPhone and iPad. Keep in mind that the MBP is the oldest of the three devices.

What I found rather interesting is the GPU-based Compute score. The iOS version of Geekbench uses Metal, the graphics framework that’s a part of iOS. Geekbench on the MBP uses OpenCL, and because I’m too cheap to buy a copy, it ran on the i7’s built-in Iris Pro instead of the beefier AMD Radeon R9 M370X. So even though I’m using the “lesser” graphics processor and the “poorer” graphics framework, the MBP still scored a solid two to three times faster than either iOS device. Also of note is the sizable Compute lead of the iPad over the iPhone, even though the iPhone’s CPU is clocked faster and uses a more current Apple SoC.

So, am I ready to trade in the MBP for either iOS device? It all depends on the use case.

For general use involving reading content and typing, I could easily switch to the iPad Pro. I use it with a Logitech keyboard-and-cover in landscape mode, which, when attached to the iPad via the Smart Connector, gives me a decent keyboard with back-lit keys. It’s not as efficient and comfortable as the MBP keyboard, but it’s more than serviceable, especially over a period of hours. I can do writing and other kinds of textual creation, as well as fairly sophisticated graphical content creation and photo/video post-processing. There are, however, limits to the iPad Pro.

For the ultimate web experience I prefer the MBP and my selection of browsers, which includes Chrome, Firefox, and Vivaldi. I am not a fan of Safari on either iOS or macOS, and I don’t think I ever will be. What makes web browsing on iOS truly annoying is Apple’s insistence on forcing every other browser to use the same web engine as iOS Safari, which is buggy and performs poorly.

When I need to develop software I much prefer the MBP. For light code editing on the iPad Pro I use Textastic with Working Copy. I also have Terminus on iOS, which lets me ssh into machines around my home running Linux and macOS (there’s nothing comparable for Windows, unfortunately). Over ssh I tend to use vim with extensive customizations and colorizations, and I can use scp and git to move around anything that needs moving. So the iPad Pro makes a pretty decent work platform when I don’t want to fire up the MBP, especially when interruptions mean I need to be able to put it down quickly.

I haven’t even mentioned the iPhone, but it’s decent enough to fill in for the iPad when all I can carry is the iPhone. I use a Microsoft Folding Bluetooth keyboard to type on, and I have an SDHC-to-Lightning card reader for reading the JPEG and RAW files produced by my Olympus cameras. The same apps I use on the iPad for post-processing work just fine on the iPhone 7 Plus. And when I don’t want to, or can’t, carry my Olympus camera, the iPhone 7 Plus camera is just fine.

Finally, there’s the truly heavy lifting that the MBP is called upon to do. For example, I have a number of Linux virtual machines I power up to perform testing and development in parallel with work on the MBP. I use Xcode to develop iOS applications, as well as Android Studio to develop Android applications. If I want to develop against a full JavaScript stack starting with Node.js, the MBP is the only way to go. If I want to develop in Java or Python or Go or Rust, only the MBP allows me to do that.

And the 15″ screen on the MBP is the easiest of all the screens to read, which is important given my poor eyesight (20/700 and nearsighted).

There is no easy answer to the original question, except to say it all depends. As long as I can choose which device to use for which task, I will keep choosing among all three based on the work at hand.

But I am impressed with what the Apple SoCs can accomplish. While the MBP rules them all, the three devices’ single-core scores are fairly close together compared to the multi-core and Compute scores. This bodes well for Apple’s continued evolution of its ARM-based processors, and if I were Intel, I really would be looking over my shoulder at ARM in general and Apple in particular.

the fallacy of safe code

January 31, 2016
Ken Thompson

In the last post I wrote about a simple way to help create so-called safe code, or code that executes correctly. I wrote that code in C++ using gcc/g++ (the GNU Compiler Collection). What was implied, but never explicitly addressed, was my use of the gcc standard libraries along with the compiler, and the implicit trust I put in both not to do evil with my small application. It’s that trust I want to address in this post. I’ll start by quoting Ken Thompson (co-creator of Unix) and his ACM Turing Award lecture, published in 1984, “Reflections on Trusting Trust”:

You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. (emphasis mine)

The entire paper is devoted to the Thompson Hack, a method he came up with for compromising the C compiler itself so that it produces malicious versions of critical applications, in a way he (and later, many others) thought at the time was undetectable. As an example he spoke of the compiler adding a backdoor to login whenever it detected it was compiling login. Although published in 1984, the lecture was actually describing what had been discovered a full decade earlier, in 1974. This was pretty interesting and prescient, given today’s revelations. But lest you think that it couldn’t get worse, it actually could. As Ken Thompson continues on in the same paragraph:

In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.

The comment about microcode is telling; it strikes at the heart of computing, the computer hardware itself. In 2008 IEEE Spectrum published “The Hunt for the Kill Switch,” in which the authors detailed some of the ways they thought our adversaries could cripple our defense systems (if they hadn’t already) through hardware kill switches or back doors built into the chips the DoD was buying (the constantly troubled F-35 was brought up as a prime example). Who would put such things into chips? The very people we’ve off-shored chip manufacturing to. Since the 1980s the commercial US chip makers have been pushing chip fabrication and packaging offshore to save money, and Asian countries, China chief among them, have been falling all over themselves to oblige. The response at the time was DARPA’s Trust in Integrated Circuits (TRUST) program. Whether that has had any positive effect remains to be seen. I personally have my doubts…

Are you sufficiently paranoid yet? Let’s dial the paranoia back a bit and climb back up the rabbit hole into the realm of mere software. Referring back to the Thompson Hack, David A. Wheeler countered with a solution titled “Countering Trusting Trust through Diverse Double-Compiling,” in which he provides a way to protect yourself from the Hack. The key to Wheeler’s solution is to use a second, trusted compiler to rebuild the suspect compiler from source and compare the results. While the paper explains the technique well, the question of where to get that second, trusted compiler is never satisfactorily answered.

But running untrusted code doesn’t have to be this convoluted; picking up a commonly used library or tool for your system is enough. Consider two of the more egregious errors that led to security lapses, Shellshock and Heartbleed, both disclosed in 2014. Shellshock came about because of buggy code checked into the bash shell sources back in 1992, during a much more innocent period. The web came along, many developers decided to use bash as part of a web server’s execution backend, and that opened up the bash remote-execution hole. Heartbleed, caused by a bug introduced into OpenSSL sometime in 2012, was so devastating because it allowed an attacker to read server memory and potentially recover the private keys of the digital certificates used to authenticate servers and encrypt traffic between those servers and their users. It meant that web site operators needed not only to patch their systems, but probably also to re-apply for all new certificates, just in case. What led to this was too much code written over too long a period of time and maintained by too few developers paid too little. The folks who wanted something for nothing didn’t bother to look at the open source, thinking someone else had already done the critical vetting, which just goes to prove the fallacy of Linus’ Law.

I’ve made myself so depressed reading these types of reports over the last ten to fifteen years that I’ve seriously considered selling all my computers and going to live off the grid. But I’m way too old for that now, and running from the problem means the bad people exploiting all of this win. I hate when that happens. I may not be able to stop them, but I can certainly do my part to slow them down as much as possible: keep my mouth shut while paying attention to what gets published around me in these areas, and keep on keeping on.

I’ve been adding capabilities to my Mac Mini since powering it up in the middle of this past week. After performing all the necessary OS updates, I installed Xcode 6.2.2. Xcode comes with a series of tools, one of which will be shown later in this article. The installation of Xcode is important to the clean installation of the two Java-based tools covered later in this post, Android Studio and IntelliJ IDEA 14.

System Specifics

  • Mac mini Server (Late 2012)
  • Processor 2.3 GHz Intel Core i7
  • Memory 4GB
  • Two 2 TB HDDs

Preconditions

  • Mac OS X Yosemite Version 10.10.1 (update free from App Store)
  • Mac Server 4.0.3 (update free from App Store)
  • Xcode 6.2.2 (installed free from App Store)
  • Apple’s Java (Java 6) is not installed

Software Installed

Installation of Latest Java

Download the latest version from the Oracle Technology Network. You should be able to find it under New Downloads in the upper right corner of the web page. For this example I’m installing the 64-bit Java 8 update 31 (jdk-8u31-macosx-x64.dmg). After downloading, launch the installer by double clicking the dmg file.
Follow the directions and double click the icon (the transparent yellow square in the open box) to start the installation of Java. The first screen presented to you is an introduction; there’s very little there, so click through. When you click to install, you’ll be asked for your login account password, so type it in to continue. The installation itself is a simple progress bar; when it reaches the end all the software will be installed on your system. The last dialog is the summary, with a close button. Once installed you can add a JAVA_HOME variable if you like with an “export JAVA_HOME=$(/usr/libexec/java_home)” in your .profile, but I don’t believe it’s necessary. You certainly don’t need to edit $PATH, as there are links to java, javac, and jar in /usr/bin.
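
To sanity-check the install and optionally set JAVA_HOME, something along these lines works from a bash prompt (the .profile edit is the same export mentioned above):

# Confirm the new JDK is the one being picked up; this install should report 1.8.0_31.
java -version

# /usr/libexec/java_home prints the path of the current JDK; use it for JAVA_HOME.
echo 'export JAVA_HOME=$(/usr/libexec/java_home)' >> ~/.profile
source ~/.profile
echo $JAVA_HOME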

You can also, if you wish, download the Java 8 demos and samples from the Oracle site. They come as a zip file; once unzipped you’ll find all the demo and example applications that were once bundled with Java. Also included in this collection are example JavaFX applications, which are well worth looking at. JavaFX is so much better for UI development than the older and much worse looking Swing.

Android Studio and IntelliJ IDEA 14 Pre-Installation Preparation

The installation for both is very similar, as it should be, considering that Android Studio is based on IntelliJ IDEA. For both, download the respective dmg files, double click them to open them in Finder, then drag them into the Applications folder. Once they’re in the Applications folder you need to make a change to their respective plist files. Find the applications in the Applications folder, right click on them, and select Show Package Contents from the menu.

When the Contents folder appears, click through to where Info.plist is located and double click on it. When Xcode opens the plist (you did install Xcode, did you not?), expand JVMOptions, select JVMVersion, and click on the string to edit it. The string, as shipped, is “1.6*”; note the asterisk. If the string is left this way you will be prompted by both applications to install Apple’s much older Java 6. Changing the string to “1.6+”, with a “+” replacing the “*”, tells the application to use the version of Java installed on the machine. Save the file and then double click the application to finish its configuration.
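
If you would rather not open Xcode just for this one edit, the same change can be made from the terminal with PlistBuddy. This is a sketch assuming Android Studio sits in /Applications; adjust the path for IntelliJ IDEA accordingly.

cd "/Applications/Android Studio.app/Contents"

# Show the shipped value (1.6*), then change the trailing * to + so the application
# accepts the locally installed Java instead of asking for Apple's Java 6.
/usr/libexec/PlistBuddy -c "Print :JVMOptions:JVMVersion" Info.plist
/usr/libexec/PlistBuddy -c "Set :JVMOptions:JVMVersion 1.6+" Info.plist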

For Android Studio I chose to download the separate Android SDK and install it (for my purposes) under $HOME/Java. The SDK comes as a zip file, so unzipping it under $HOME/Java produces $HOME/Java/android-sdk-macosx.
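
In terminal form, and assuming the SDK zip landed in ~/Downloads (the file name here is illustrative):

mkdir -p "$HOME/Java"
unzip ~/Downloads/android-sdk-macosx.zip -d "$HOME/Java"
ls "$HOME/Java"        # should now show android-sdk-macosx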

Installing Android Studio 1.0.1

With everything properly prepared, we’re ready to finish Android Studio’s installation (and IntelliJ IDEA’s, if you’re so inclined). I’m only going to show Android Studio, because it’s the more complex of the two thanks to the Android SDK; otherwise they’re nearly identical. Step through the setup wizard, and make sure you open both top-level items on the license screen and agree to their licensing. You’ll then spend several minutes (or longer) watching Android Studio download whatever Android SDK components it thinks it needs. Once that finishes, you’re done and ready to start development. The installation of IntelliJ IDEA is essentially the same but shorter, so I’ll spare you the details.

Notes to The Gentle Reader

  • If you think this is overly complicated, it’s not. It’s no worse (and some might argue even simpler) than installation on Linux or Windows. The only obscure piece I had to go looking for was the plist adjustment; that information is missing from both Google’s Android Studio and JetBrains’ IntelliJ IDEA websites. You’re welcome.
  • This is an initial install. I have no idea what will happen when I update Java, and I will update Java as each version is released. You update for the bug fixes. Failure to update any software is an invitation to grief, which is so easy to avoid. If any problems crop up when an update occurs then I’ll post an update about it here.

There comes a time in any programmer’s life when they write logging functions. Sometimes it’s little more than std::cout <<; sometimes it’s a full-bore implementation along the lines of Boost.Log. And sometimes you want something in between: more than a print statement, but without the overhead of Boost.Log. What I’m about to list is that kind of logging facility. It comes in just two files, a header and an implementation, plus a simple test file to try it out.

First, the code listings.

#ifndef IT_LOGGER_H
#define IT_LOGGER_H

#include <iostream>
#include <string>

namespace IT
{
    class Logger {
        public:
        enum Level {
            INFO,   // Everything.
            ERROR,  // An error in the code itself.
            FATAL   // What is caused by a runtime exception.
        };

        // Set Logger to an output stream. Default is std::cout.
        //
        explicit Logger(std::ostream &ostr);

        // Set the stream we want to log to. Default is std::cout.
        //
        static std::ostream &setStream(std::ostream *ostr = 0);

        // Set the reporting level. Default is INFO.
        //
        static void setLevel(const IT::Logger::Level = INFO);

        // The logging function. This is wrapped in the LOG macro below.
        //
        static void log(const Level, const std::string fileName, const int line, const std::string msg);
    };
}

// Generate a logger entry. Macros are evil, but this is a necessary evil.
//
#define LOG(level, string) \
    IT::Logger::log((IT::Logger::level), __FILE__, __LINE__, (string))

#endif

it_logger.h

#include <cctype>
#include <cstdlib>
#include <ctime>
#include <fstream>
#include <sstream>
#include <strings.h>

#include "it_logger.h"

// All the code in this anonymous namespace block is executed once and only once
// during application startup. This is a "silent" new for Logging to make sure
// it is properly configured and sane.
//
namespace
{
    // Check to see where logging output will go. Default is standard out.
    //
    // If the user defines the environmental variable ITLOGFILE, then that
    // will be the logging output. The file will be opened relative to the
    // home directory of the user running the application. For example, if
    // the user, using bash, performs 'export ITLOGFILE=foo.txt', then runs
    // the application, all logging will go to $HOME/foo.txt.
    //
    // Failures to open are silent, and the default on failure is standard out.
    //
    std::ostream* checkLoggingOutput(void) {
        std::ostream *logout = &std::cout;
        std::stringstream logfilename;
        char *home = getenv("HOME");
        char *env = getenv("ITLOGFILE");

        if ((home != NULL) && (env != NULL)) {
            logfilename << home << "/" << env;
            std::ofstream *ofs =
                new std::ofstream(
                    logfilename.str().c_str(), std::ofstream::out | std::ofstream::app);

            if (ofs->is_open()) {
                logout = ofs;
            }
            else {
                // new never returns NULL; if the open failed, release the
                // stream and fall back to standard out.
                delete ofs;
            }
        }

        return logout;
    }

    std::ostream *outPtr = checkLoggingOutput();

    // Check what logging level to use. Default is IT::Logger::INFO, matching
    // the header documentation.
    //
    // If the user defines the environmental variable ITLOGLEVEL, then that
    // sets the reporting level. Levels are INFO, ERROR, and FATAL.
    // The level is set, using bash as an example,
    // by 'export ITLOGLEVEL=INFO' (if you want INFO level logging or higher).
    // The three levels are case insensitive (info is the same as INFO, etc).
    //
    // Failures due to misspellings are silent and leave the default in place.
    //
    IT::Logger::Level checkLoggingLevel(void) {
        IT::Logger::Level level = IT::Logger::INFO;
        char *env = getenv("ITLOGLEVEL");

        if (env != NULL) {
            if (strcasecmp(env, "INFO") == 0) {
                level = IT::Logger::INFO;
            }
            else if (strcasecmp(env, "ERROR") == 0) {
                level = IT::Logger::ERROR;
            }
            else if (strcasecmp(env, "FATAL") == 0) {
                level = IT::Logger::FATAL;
            }
        }

        return level;
    }

    IT::Logger::Level levelFilter = checkLoggingLevel();

    std::ostream &setStream(std::ostream *ostr) {
        // A null pointer resets logging to standard out.
        outPtr = (ostr != NULL) ? ostr : &std::cout;
        return *outPtr;
    }

    static const char* lname[] = {
        "-  - INFO  : ",
        "!!!! ERROR : ",
        "**** FATAL : "
    };

    const char* levelToString(const IT::Logger::Level level) {
        // sizeof(lname) is the array size in bytes; divide by the element size
        // to get the number of entries.
        const size_t count = sizeof(lname) / sizeof(lname[0]);
        return (level >= 0 && static_cast<size_t>(level) < count) ? lname[level] : "UNKNOWN: ";
    }
}
// end anonymous namespace

IT::Logger::Logger(std::ostream &ostr) {
    setStream(&ostr);
}

std::ostream &IT::Logger::setStream(std::ostream *ostr) {
    // Forward to the helper in the anonymous namespace; the leading :: is
    // required, otherwise this call would recurse into this member function.
    return ::setStream(ostr);
}

void IT::Logger::setLevel(const IT::Logger::Level level) {
    ::levelFilter = level;
}

// A LOG macro is wrapped around this specific function call.
// The level is one of the three IT::Logger::Levels defined in the header.
// The filename is the source file name in which this was invoked via the macro.
// The line number is the source file line number, i.e. where you
// would expect to find when you open up the source file in a text editor.
// The message is the explicit message written with the macro.
// Logging is timestamped with the system's current local time.

void IT::Logger::log(
    const Level level,
    const std::string filename,
    const int line,
    const std::string message) {

    if (level >= ::levelFilter) {
        char atime[80];
        time_t rawtime = time(NULL);
        tm *curtime = localtime(&rawtime);
        strftime(atime, sizeof(atime), "%c %Z : ", curtime);
        *outPtr << levelToString(level) << atime;
        *outPtr << filename << " : " << line << " : " << message << std::endl;
    }
}

it_logger.cpp

#include "it_logger.h"

int main(int argc, char *argv[]) {
    std::cout << "Starting..." << std::endl;
    LOG(INFO, "Info logging.");
    LOG(ERROR, "Error logging.");
    LOG(FATAL, "Fatal logging.");

    IT::Logger::setLevel(IT::Logger::ERROR);
    std::cout << std::endl;

    LOG(INFO, "Info logging 2.");
    LOG(ERROR, "Error logging 2.");
    LOG(FATAL, "Fatal logging 2.");

    return 0;
}

logtest.cpp

Usage

Include the header file in any source to which you want to add logging; see logtest above for examples of how to use it in code. See the notes in the code for details on the environmental variables ITLOGLEVEL and ITLOGFILE, which provide simple control over the reporting level and the output destination. Compile and then use. I’ve tested this on RHEL 5 and Ubuntu 14.10, both with g++.
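
A typical build and a couple of runs, assuming all three files sit in one directory (the log file name is just an example):

g++ -o logtest it_logger.cpp logtest.cpp

# Log to standard out at the default level.
./logtest

# Send log entries to $HOME/mylog.txt and only report ERROR and FATAL.
export ITLOGFILE=mylog.txt
export ITLOGLEVEL=ERROR
./logtest
cat "$HOME/mylog.txt"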