How to Determine the macOS Version from the Command Line

There are a few ways to determine which version of macOS you are running.

In the GUI, you can click on the Apple menu () at the top left of your screen and choose About This Mac. The version of macOS is printed underneath the large bold product name. Clicking on the Version X.Y.Z text will reveal the Build number.

If you’re writing a script, though, you may wish to access this information programmatically. To do that, type:

$ system_profiler SPSoftwareDataType
Software:

    System Software Overview:

      System Version: macOS 10.12.6 (16G29)
      Kernel Version: Darwin 16.7.0
      Boot Volume: Macintosh HD
      Boot Mode: Normal
      Computer Name: Srini’s MacBook Pro
      User Name: Srini Karlekar (skarlekar)
      Secure Virtual Memory: Enabled
      System Integrity Protection: Enabled
      Time since boot: 21 days 8:30

$
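If you only need the version and build strings, the built-in sw_vers utility produces a much shorter, script-friendly answer:

$ sw_vers
ProductName:	Mac OS X
ProductVersion:	10.12.6
BuildVersion:	16G29

Individual fields can be requested directly; for example, sw_vers -productVersion prints just 10.12.6.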

Troubleshooting python-jose Installation

Introduction

The JavaScript Object Signing and Encryption (JOSE) technologies – JSON Web Signature (JWS), JSON Web Encryption (JWE), JSON Web Key (JWK), and JSON Web Algorithms (JWA) – collectively can be used to encrypt and/or sign content using a variety of algorithms.

Installation

$ pip install python-jose
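After installation, a quick smoke test exercises python-jose's documented jwt API by signing and then decoding a token (the claim name used here is arbitrary):

$ python -c "from jose import jwt; t = jwt.encode({'user': 'demo'}, 'secret', algorithm='HS256'); print(jwt.decode(t, 'secret', algorithms=['HS256']))"
{'user': 'demo'}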

Troubleshooting

When I tried to install python-jose on my Mac running macOS Sierra (10.12.6), I got a RuntimeError: autoconf error.

To get around this issue, I had to run the following command:

$ xcode-select --install

This command pops up a window asking for your permission to install the Xcode command-line tools. Once the compiler installation went through, the pip install worked without issue.
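You can confirm the command-line tools are in place by printing the active developer directory (the path may differ if you have the full Xcode installed), then re-running the install:

$ xcode-select -p
/Library/Developer/CommandLineTools
$ pip install python-jose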

Full details of the error are below:

(newpython3) srini-macbookpro:aws-auth0-auth skarlekar$ pip install python-jose
The directory '/Users/skarlekar/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/skarlekar/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting python-jose
  Downloading python_jose-1.4.0-py2.py3-none-any.whl
Collecting ecdsa<1.0 (from python-jose)
  Downloading ecdsa-0.13-py2.py3-none-any.whl (86kB)
    100% |████████████████████████████████| 92kB 3.6MB/s
Collecting six<2.0 (from python-jose)
  Downloading six-1.11.0-py2.py3-none-any.whl
Collecting future<1.0 (from python-jose)
  Downloading future-0.16.0.tar.gz (824kB)
    100% |████████████████████████████████| 829kB 941kB/s
Collecting pycrypto<2.7.0,>=2.6.0 (from python-jose)
  Downloading pycrypto-2.6.1.tar.gz (446kB)
    100% |████████████████████████████████| 450kB 1.6MB/s
Installing collected packages: ecdsa, six, future, pycrypto, python-jose
  Running setup.py install for future ... done
  Running setup.py install for pycrypto ... error
    Complete output from command /Users/skarlekar/anaconda/envs/newpython3/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-build-spygzevz/pycrypto/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-y3p2qipv-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_py
    creating build
    creating build/lib.macosx-10.7-x86_64-3.6
    creating build/lib.macosx-10.7-x86_64-3.6/Crypto
    copying lib/Crypto/__init__.py -> build/lib.macosx-10.7-x86_64-3.6/Crypto
    copying lib/Crypto/pct_warnings.py -> build/lib.macosx-10.7-x86_64-3.6/Crypto
    creating build/lib.macosx-10.7-x86_64-3.6/Crypto/Hash
    copying lib/Crypto/Hash/__init__.py -> build/lib.macosx-10.7-x86_64-3.6/Crypto/Hash
    copying lib/Crypto/Hash/hashalgo.py -> build/lib.macosx-10.7-x86_64-3.6/Crypto/Hash
    ... (snipped for brevity) ...
    Skipping optional fixer: buffer
    Skipping optional fixer: idioms
    Skipping optional fixer: set_literal
    Skipping optional fixer: ws_comma
    running build_ext
    running build_configure
    checking for gcc... gcc
    checking whether the C compiler works... yes
    checking for C compiler default output file name... a.out
    checking for suffix of executables...
    checking whether we are cross compiling... configure: error: in `/private/var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-build-spygzevz/pycrypto':
    configure: error: cannot run C compiled programs.
    If you meant to cross compile, use `--host'.
    See `config.log' for more details
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/private/var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-build-spygzevz/pycrypto/setup.py", line 456, in <module>
        core.setup(**kw)
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/dist.py", line 955, in run_commands
        self.run_command(cmd)
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run
        return orig.install.run(self)
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/command/install.py", line 545, in run
        self.run_command('build')
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/command/build.py", line 135, in run
        self.run_command(cmd_name)
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/private/var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-build-spygzevz/pycrypto/setup.py", line 251, in run
        self.run_command(cmd_name)
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/dist.py", line 974, in run_command
        cmd_obj.run()
      File "/private/var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-build-spygzevz/pycrypto/setup.py", line 278, in run
        raise RuntimeError("autoconf error")
    RuntimeError: autoconf error

AWS Ethical Hacking Tutorial

Introduction

I found this awesome, brisk, one-hour, self-paced tutorial on finding and avoiding security gotchas in your AWS setup. I learned a lot going through it and highly recommend it for your AWS teams. Go to: flAWS (http://flaws.cloud).

Summary

Through a series of levels you’ll learn about common mistakes and gotchas when using Amazon Web Services (AWS). The author (Scott Piper) provides a series of hints that teach you how to discover the information you’ll need to advance to the next level. If you don’t want to actually run any commands, you can just keep following the hints, which will give you the solution to the next level. At the start of each level, you’ll learn how to avoid the problem the previous level exhibited.

Lesson learned

It is common to give people and entities read-only permissions, such as the SecurityAudit policy. The ability to read your own and others’ IAM policies can really help an attacker figure out what exists in your environment and look for weaknesses and mistakes.

Avoiding this mistake

Don’t hand out any permissions liberally, even permissions that only let someone read metadata or discover what their own permissions are.

Managing FaaS Services Deployed Across Different Cloud Providers

Introduction

Sometimes you have to pick the best-of-breed solution for each need. This is true of the services provided by the different cloud providers as well. For instance, when it comes to cognitive services, Google, Amazon, and Microsoft rule the roost. Even among them, Google does natural-language translation, landmark recognition, text extraction from images, and content-based search best; Amazon is the leader in facial recognition; and I found Microsoft’s image labeling to be the best of the breed.

No longer do you have to settle on one cloud provider for all your needs. With frameworks such as the Serverless Framework, you can develop services for various providers and deploy and manage them in a cloud-agnostic fashion. From a single environment, you can develop, test, and deploy to most of the big cloud providers without having to worry about their idiosyncrasies, and you can react to cross-cloud events.

Demonstration

This is a simple tutorial to demonstrate how to deploy multiple services on different cloud providers using the Serverless Framework.

More specifically, this tutorial walks you through deploying an image detection service on Google Cloud Platform (GCP) and managing it using a proxy service running on Amazon Web Services (AWS). The services on both platforms are 100% serverless.

The image detection service running on GCP uses Google’s FaaS solution, Cloud Functions, and the proxy running on AWS uses Amazon’s FaaS solution, Lambda.

In a typical scenario, you would use a service such as this to detect the contents of an image uploaded to an S3 bucket and take appropriate action based on the result. For instance, you could blur or reject the image based on its vulgarity, or take the image labels and chain them to other services that translate the labels into multiple languages to cater to your customers’ needs.

The code repository can be found here.

Setup

Setup Amazon AWS

  1. Sign in to your AWS account or sign up for one.

  2. Set up your AWS credentials by following the instructions from here.

Install node.js and the Serverless Framework

The Serverless Framework is a node.js application, so to use the framework and run this tutorial you need to install node.js. Follow the instructions on the Serverless website to install both node.js and the Serverless Framework.
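With node.js in place, the framework is typically installed globally through npm; this also puts the sls shorthand for the serverless command, used throughout this tutorial, on your path:

$ npm install -g serverless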

Ensure your Serverless Framework installation is operational using the following:

$ serverless --version

Testing your Serverless Setup

Now that you have set up AWS, it is time to test your Serverless setup by creating a mock function using the Serverless Framework.

Create a test directory. In the test directory, create a Lambda function from the default template as follows:

$ mkdir sls-tester
$ cd sls-tester
$ sls create --template aws-python --name sls-test

This should create two files in the current directory:

serverless.yml

handler.py

The serverless.yml file declares a sample service and a function. The handler.py file returns a message stating that your function executed successfully.
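For orientation, the generated serverless.yml looks roughly like the sketch below. Treat it as an approximation: the exact contents, and in particular the runtime version, depend on your framework version.

service: sls-test

provider:
  name: aws
  runtime: python2.7   # assumption: the aws-python template of this era targeted Python 2.7

functions:
  hello:
    handler: handler.hello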

To deploy the function, simply type:

$ sls deploy --verbose

This should deploy the function. The --verbose option prints extra information about each deployment step.

To test your function, type:

$ sls invoke --function hello

If you get the following message, your Serverless setup is working.

{
    "body": "{\"input\": {}, \"message\": \"Go Serverless v1.0! Your function executed successfully!\"}",
    "statusCode": 200
}
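You can also pass a test event with the --data flag; the payload you send is echoed back in the input field of the response shown above:

$ sls invoke --function hello --data '{"key": "value"}'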

To check the logs for your function, type:

$ sls logs -f hello

To tail the logs for your function continuously, type:

$ sls logs -f hello -t

Setup Google Cloud

  1. Sign up for a new Google account at http://accounts.google.com. If you already have an account, you can skip this step.
  2. Sign up for a Google Cloud trial at http://console.cloud.google.com/start. If you already have Google Cloud privileges on your Google account, you can skip this step.
  3. Create a new project and call it serverless-project (or a name of your choice).
  4. Select Credentials in the API & Services section of the Google Cloud console.
  5. Under Create Credentials, create a new Service Account Key. Download the JSON key file to a secure place, as you will need it in subsequent steps.
  6. In the API & Services dashboard, enable the Google Cloud Vision API, Google Cloud Functions API, Google Cloud Deployment Manager API, Google Cloud Storage, and Stackdriver Logging (a CLI alternative is sketched below).
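If you prefer the terminal, the same APIs can be enabled with the Cloud SDK's gcloud tool, assuming it is installed and pointed at your project (the identifiers below are the standard service names for these APIs):

$ gcloud services enable vision.googleapis.com cloudfunctions.googleapis.com \
    deploymentmanager.googleapis.com storage-component.googleapis.com logging.googleapis.com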

Image Detector

gcp-label-image

The gcp-label-image service will be deployed on GCP. It is a node.js-based service that takes an image URL passed in the HTTP request, sends it to Google Vision to detect the contents of the image, and returns a list of tags describing the image’s content.

The image URL should be passed as an HTTP parameter named imageUri. If this parameter is missing, the service falls back to a default image and returns its contents.
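Once deployed (see the next section), the service can be exercised with a plain HTTP GET, for instance via curl; the host below is the placeholder endpoint format used throughout this tutorial:

$ curl "https://your-region-your-project-id.cloudfunctions.net/detect?imageUri=https://goo.gl/27gclq"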

Deploying the Image Detector Service

  1. Location: Go to the gcp-label-image subdirectory in the folder where you cloned the Git repository.
  2. Project: Replace your-gcp-project-id in the serverless.yml file with your Google Cloud project ID.
  3. Credentials: Replace /path/to/your/gcp/credentials/json in the serverless.yml file with the path to the JSON credentials file that you saved during the GCP setup.
  4. Deploy: In the service home directory, run the following command to deploy the detectLabel service on GCP. Make a note of the endpoint created; it is a URL that will end with detect, as shown below.
    $ sls deploy --verbose
    ...
    Deployed functions
    detectLabel
    https://your-region-your-project-id.cloudfunctions.net/detect
    ...
  5. Verify: You can check your Google Cloud Functions dashboard to ensure that your Cloud Function is deployed.
  6. Invoke@theTerminal: Invoke the function detectLabel as follows:
    $ sls invoke -f detectLabel
    Serverless: ekvy90t28px8 Image Results: landmark historic site sky tourist attraction ancient rome ancient history ancient roman architecture medieval architecture night building
    $
  7. Invoke@theBrowser: Copy and paste the URL from the output of your sls deploy into the browser and add the imageUri parameter as follows:
    https://your-region-your-project-id.cloudfunctions.net/detect?imageUri=https://goo.gl/27gclq

Image Detector Proxy

aws-gcp-proxy

The aws-gcp-proxy service will be deployed on AWS. It is a Python-based service that takes an image URL passed in the HTTP request and forwards it to the Cloud Function deployed on GCP.

As described in the introduction, a typical use is to detect the content of an image uploaded to an S3 bucket and take appropriate action based on the result, such as rejecting the image or translating its labels for your customers.

The image URL should be passed as an HTTP parameter named imageUri. If this parameter is missing, the service falls back to a default image URL and returns its contents.
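To make the flow concrete, here is a minimal sketch of what such a Lambda proxy handler might look like. It is illustrative only, not the repository’s actual code: the handler name, the use of the requests library, and the response shape are assumptions; GFUNC_URL is the environment variable configured via setEnv.sh in the next section.

import json
import os

import requests  # assumption: bundled with the deployment package

# URL of the GCP Cloud Function, injected via environment variable (see setEnv.sh)
GFUNC_URL = os.environ["GFUNC_URL"]

def detect(event, context):
    """Forward the imageUri query parameter to the GCP image detector."""
    params = event.get("queryStringParameters") or {}
    image_uri = params.get("imageUri")

    # Pass imageUri through to the Cloud Function; the Cloud Function itself
    # falls back to a default image when the parameter is absent.
    resp = requests.get(GFUNC_URL, params={"imageUri": image_uri} if image_uri else None)

    # API Gateway proxy integration expects a statusCode/body dictionary.
    return {
        "statusCode": resp.status_code,
        "body": json.dumps({"labels": resp.text}),
    }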

Deploying the Image Detector Proxy Service

  1. Location: Go to the aws-gcp-proxy subdirectory in the folder where you cloned the Git repository.
  2. Environment Variable: Edit the setEnv.sh file to update GFUNC_URL to point to your image detector service running on GCP.
  3. Deploy: In the service home directory, run the following command to deploy the proxy service. Make a note of the AWS Gateway endpoint created; you will use this endpoint to test your service.
    $ sls deploy -v
    ...
    GET - https://urmileagemaydiffer.execute-api.us-east-1.amazonaws.com/dev/detect
    ...
  4. Verify: Check your AWS Lambda dashboard to ensure that the Lambda function was created and that the environment variable is being passed.
  5. Invoke: Copy and paste the AWS Gateway API URL into the browser, adding the imageUri parameter as follows (or use curl, as shown after this list):
    https://urmileagemaydiffer.execute-api.us-east-1.amazonaws.com/dev/detect?imageUri=https://goo.gl/27gclq
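The same check from the terminal, using the example endpoint printed by sls deploy above:

$ curl "https://urmileagemaydiffer.execute-api.us-east-1.amazonaws.com/dev/detect?imageUri=https://goo.gl/27gclq"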

Conclusion

The Serverless Framework makes it painless to deploy services across multiple cloud providers without having to deal with each provider’s idiosyncrasies, letting you focus on your application. Additionally, the framework allows you to use the right provider for each service and cuts the time spent on deployment, while letting you manage code and infrastructure across multiple providers.