Week of April 3rd

“The other day, I met a bear. A great big bear, a-way out there.”

As reported last week, I began to dip my toe into the wonderful world of Python. I wasn’t able to finish the Pluralsight course Core Python: Getting Started by Robert Smallshire and Austin Bingham by week’s end, so I had to do some extended learning over the weekend. I finished the “Iteration and Iterables” module, which I had started on Friday, and then spent the rest of the weekend on the “Classes” module, which was nothing short of a nightmare. I spent hours trying to debug my horrific code and rewatching the lessons over and over again, which left me with the conclusion that I simply don’t get object-oriented programming and probably never will.

Ironically, I reached that same conclusion almost 25 years ago in my last class at the University at Albany, which was on object-oriented programming in C++. Fortunately, I escaped that one with a solid D-, passed Go, collected my $200, and moved on to the working world. So after languishing with Python classes, on Monday I proceeded to the final module, on File IO and Resource Management, which seemed more straightforward and practical.

On Tuesday, life got a whole lot easier when I installed Anaconda Navigator. Up until this point I had been writing my Python scripts in the TextWrangler editor on the Mac, which was completely inefficient.
Through Anaconda, I discovered the Spyder IDE, which was like a breath of fresh air. No longer did I have to worry about aligning spaces or matching parentheses and curly and square brackets. With a proper IDE environment, I was able to begin my journey down the pandas jungle…

Here is what I did:

  1. Completed the Pandas Fundamentals course
  2. Installed the Anaconda pandas Python module and SQLite
  3. Created pandas/Python scripts that:
     - Read in a CSV file (the Tate Museum Collection) and output it to a pickle file
     - Read in a JSON file and write the output to the screen
     - Traverse directories containing multiple JSON files and write the output to a file
     - Perform iteration, aggregation, and filtering (transformation)
     - Create indexes on data from the CSV file for faster retrieval of data
     - Read a data source (the Tate Museum Collection) and output the data to Excel spreadsheets, with multiple columns, multiple sheets, and colored-column options
     - Connect to an RDBMS using the SQLAlchemy module (using an SQLite database as a POC), creating a table and writing data to it from a data source (a pickle file)
     - Create a JSON file from a data source (a pickle file)
     - Create a graph using the matplotlib and matplotlib.pyplot modules. See attachment.
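Taken together, those scripts boil down to a handful of pandas calls. Here is a minimal, self-contained sketch of that workflow, using a tiny in-memory DataFrame in place of the Tate Museum Collection CSV — all file names and column names below are hypothetical stand-ins, not the ones from the course:

```python
import sqlite3

import pandas as pd

# Stand-in for data read via pd.read_csv("artwork_data.csv")
df = pd.DataFrame({
    "artist": ["Blake", "Blake", "Turner"],
    "title": ["A", "B", "C"],
    "year": [1800, 1805, 1830],
})

# Persist to a pickle file and read it back
df.to_pickle("collection.pickle")
restored = pd.read_pickle("collection.pickle")

# Aggregation and filtering (transformation)
works_per_artist = df.groupby("artist")["title"].count()
post_1820 = df[df["year"] >= 1820]

# Index on a column for faster label-based retrieval
by_artist = df.set_index("artist")
blake_works = by_artist.loc["Blake"]

# Excel output (needs the openpyxl or xlsxwriter package installed):
# with pd.ExcelWriter("collection.xlsx") as writer:
#     df.to_excel(writer, sheet_name="all_works")
#     post_1820.to_excel(writer, sheet_name="post_1820")

# Write to an SQLite table and read it back
with sqlite3.connect("collection.db") as conn:
    df.to_sql("artwork", conn, if_exists="replace", index=False)
    round_trip = pd.read_sql("SELECT * FROM artwork", conn)

# JSON output from the same data
df.to_json("collection.json", orient="records")
```

The course used SQLAlchemy for the database step; a plain `sqlite3` connection is used here only to keep the sketch self-contained, since pandas’ `to_sql` accepts either.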

**Bonus Points:** Continued to dredge up old nightmares from freshman year of high school as I took a stroll down memory lane with distributing binomials, perfect-square binomials, difference-of-squares binomials, factoring perfect-square trinomials and differences of squares, F.O.I.L., and other algebraic muses.
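Those identities translate nicely into one-line Python sanity checks. A quick numeric spot-check, using arbitrary test values rather than anything from the homework:

```python
# Numeric spot-checks of the algebra identities mentioned above,
# with arbitrary integer test values.
a, b, c, d = 3, 5, 7, 2

# F.O.I.L.: (a + b)(c + d) = ac + ad + bc + bd
assert (a + b) * (c + d) == a*c + a*d + b*c + b*d

# Perfect-square binomial: (a + b)^2 = a^2 + 2ab + b^2
assert (a + b) ** 2 == a**2 + 2*a*b + b**2

# Difference of squares: a^2 - b^2 = (a + b)(a - b)
assert a**2 - b**2 == (a + b) * (a - b)

print("all identities hold")
```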

In addition, I revisited conjugating verbs in Spanish and wrote descriptions (in Spanish) of nine family members.

Next Steps

There are many places I still need to explore.

Below are some topics I am considering:

- Columnstore indexes
- Best practices around SQL Server AlwaysOn (snapshot isolation, sizing of tempdb, etc.)
- Getting Started with Kubernetes with an old buddy (Nigel)
- Getting Started with Apache Kafka
- Understanding Apache ZooKeeper and its use cases

I will give it some thought over the weekend and start fresh on Monday.
Stay safe and be well.

-MCS

Originally published at http://sqlsquirrels.com on April 20, 2020.

Mark Shay


A Passionate Technologist. Blogging about my journey in learning exciting technologies