Running Truffle in a Docker container
This is a short explanation of how to set up a Truffle decentralized app using Docker containers.
Last time I started to experiment with Hadoop and simple scripts using MapReduce and Pig on a Cloudera Docker container. Now let's start playing with Spark, since this is the go-to framework for machine learning on Hadoop.
This post describes my first experiment with the Cloudera environment, applying basic MapReduce to a simple dataset.
Using the Docker HDP image from Hortonworks, it is easy to spin up a Hadoop environment on your machine.
This post is a short explanation on how to get up and running with Docker and Anaconda.
This notebook makes use of the Scrapy library to scrape data from a website. Following Scrapy's basic example, we create a QuotesSpider and run it with CrawlerProcess to retrieve quotes from http://quotes.toscrape.com.
df = pd.concat([df.drop(['meta'], axis=1), df['meta'].apply(pd.Series)], axis=1)
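A self-contained illustration of what that line does; the 'meta' column and its keys here are made up for the example:

```python
import pandas as pd

# Toy frame where 'meta' holds a dictionary per row
df = pd.DataFrame({
    "id": [1, 2],
    "meta": [{"views": 10, "likes": 3}, {"views": 25, "likes": 7}],
})

# Expand each dict in 'meta' into its own columns, then drop the original column
df = pd.concat([df.drop(['meta'], axis=1), df['meta'].apply(pd.Series)], axis=1)
print(df.columns.tolist())  # ['id', 'views', 'likes']
```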
In this project I am experimenting with sending data between JavaScript and Python using the web framework Flask. Additionally, I use matplotlib to generate a dynamic graph based on the provided user input data.
A short guide to creating an API for an existing MongoDB database using Eve.
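As a sketch of how Eve is typically configured: the database name, resource name, and schema below are placeholders for illustration, not taken from the original post.

```python
# settings.py -- Eve reads this module as its configuration at startup
MONGO_HOST = "localhost"
MONGO_PORT = 27017
MONGO_DBNAME = "mydb"  # hypothetical database name

DOMAIN = {
    "people": {  # exposes the 'people' collection as a /people REST resource
        "schema": {
            "name": {"type": "string"},
        },
    },
}
```

With this file next to a small launcher (`from eve import Eve; app = Eve(); app.run()`), Eve serves the collection as a REST endpoint backed directly by MongoDB.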
Recently I was playing with some code that generated big dictionaries and had to manipulate these dictionaries several times. I used to save them via Python's pandas to CSV and load them back from the CSV the next time I ran my script. Luckily, I found an easier way to save and load variables, namely by using Pickle.
import pickle

with open(picklename, 'wb') as f:
    pickle.dump(variable, f)   # serialize the variable to disk

with open(picklename, 'rb') as f:
    variable = pickle.load(f)  # read it back in a later run