`core` is my personal repository. Most of the code is related to providing data for a quantified self dashboard on Klipfolio. Data is ETL'd and sent to a PostgreSQL database hosted on Google Cloud SQL. Hibernate is used as the ORM and schema generator. Everything is scheduled with Quartz (a minimal sketch of that wiring follows the list below). Data is pulled from the following sources:
- Anki local SQLite database
- Calibre local SQLite database
- Fitbit API
- Goodreads API
- Google Analytics API
- Google Fit API
- Google Sheets API
- Habitica API
- HERE API
- Human API
- Indie Hackers scraping
- Kiva API
- Last.fm API
- LeetCode scraping
- LIFX API
- RescueTime API
- RottenTomatoes scraping
- Toodledo API & scraping
- Trello API
- WakaTime API
- Wikipedia: Wikimedia API, DBpedia scraping, MediaWiki API
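To illustrate the scheduling half of the pipeline, here is a minimal, self-contained Quartz sketch. The job class, group names, and cron time are invented for illustration; only the use of Quartz itself, and the Hibernate-to-PostgreSQL destination mentioned in a comment, comes from the description above.

```java
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

// Hypothetical job for illustration; the repo's real ETL jobs will differ.
public class EtlJob implements Job {
  @Override
  public void execute(JobExecutionContext context) throws JobExecutionException {
    // In the real application this would pull from one of the sources above
    // and persist the results to PostgreSQL through Hibernate.
    System.out.println("Running ETL at " + context.getFireTime());
  }

  public static void main(String[] args) throws SchedulerException {
    Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

    JobDetail job = JobBuilder.newJob(EtlJob.class)
        .withIdentity("etlJob", "etl")
        .build();

    // Fire once a day at 04:00; the actual schedule lives in Main.java.
    Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("etlTrigger", "etl")
        .withSchedule(CronScheduleBuilder.dailyAtHourAndMinute(4, 0))
        .build();

    scheduler.scheduleJob(job, trigger);
    scheduler.start();
  }
}
```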
- Install the gcloud SDK.
- Run `gcloud init` and enter your credentials in the browser.
- When prompted, select project `z1lc-qs`/`arctic-rite-143002`.
- Run
- Set the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to point to `z1lc-qs.json`. More info here.
- Install Anki, ideally a version ≥2.1.
- Log into Anki and sync.
- Install the AnkiConnect add-on. (A short sanity check covering the credentials and AnkiConnect steps follows this list.)
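To verify the credential and AnkiConnect steps, the hedged sketch below loads Application Default Credentials from the file pointed to by `GOOGLE_APPLICATION_CREDENTIALS` and pings AnkiConnect's local JSON endpoint. It assumes the `google-auth-library-oauth2-http` dependency and AnkiConnect's default port 8765; this check is not part of the repo itself.

```java
import com.google.auth.oauth2.GoogleCredentials;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SetupCheck {
  public static void main(String[] args) throws Exception {
    // Resolves credentials from the GOOGLE_APPLICATION_CREDENTIALS env var
    // (here, the z1lc-qs.json key file); throws if the variable is unset
    // or points at an unreadable file.
    GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
    System.out.println("Google credentials loaded: " + credentials);

    // AnkiConnect listens on localhost:8765 and speaks a small JSON
    // protocol; the "version" action is a cheap liveness check.
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://127.0.0.1:8765"))
        .POST(HttpRequest.BodyPublishers.ofString(
            "{\"action\": \"version\", \"version\": 6}"))
        .build();
    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println("AnkiConnect response: " + response.body());
  }
}
```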
- To avoid having passwords and API keys stored alongside code in Git, this project uses a file called `secrets.json`, which provides secrets to the application at runtime. Ensure you've provided a valid mapping for each `com.robertsanek.util.SecretType` within `secrets.json`, and that the file is located in the root directory. You can find out where this directory is on your platform by calling `com.robertsanek.util.platform.CrossPlatformUtils::getRootPathIncludingTrailingSlash`. Refer to the `secrets.template.json` file for an example of what the real `secrets.json` should look like. (A hypothetical sketch of this lookup follows this list.)
- If you plan on running the `ETL` command, ensure you've run the `ETL_SETUP` command once beforehand.
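As an illustration of how such a runtime lookup might work, here is a hypothetical sketch. The `Secrets` class, its `get` method, the example key name, and the flat JSON shape are all invented; only `SecretType`, `CrossPlatformUtils::getRootPathIncludingTrailingSlash`, and the `secrets.json` location come from the description above. It also assumes Jackson is on the classpath and that the root-path method is static and returns a `String`.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.robertsanek.util.platform.CrossPlatformUtils;
import java.io.File;
import java.io.IOException;

// Hypothetical helper; the repo's real plumbing around SecretType may differ.
public class Secrets {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  // Looks up one secret by the name of its SecretType mapping, e.g.
  // get("FITBIT_CLIENT_SECRET") (an invented key name).
  public static String get(String secretTypeName) throws IOException {
    // secrets.json lives in the platform-specific root directory; this
    // assumes the method is static and returns the path as a String.
    File secretsFile = new File(
        CrossPlatformUtils.getRootPathIncludingTrailingSlash() + "secrets.json");
    JsonNode value = MAPPER.readTree(secretsFile).get(secretTypeName);
    if (value == null) {
      throw new IllegalStateException(
          "No mapping for " + secretTypeName + " in secrets.json");
    }
    return value.asText();
  }
}
```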
Pass a command-line argument to select one of the commands below (documented in `Main.java`).

- `ETL` will run all ETLs and then run `DQ`.
- `DQ` will run data quality checks.
- `HABITICA` will generate an HTML document with a summary of Habitica dailies.
- `PASSIVE_KIVA` will generate an HTML document with short-duration Kiva loans from highly-rated field partners.
- `WIKI` will extract basic information about popular Wikipedia articles that refer to people, outputting a CSV file and images to import into Anki.
- `ETL_SETUP` needs to be triggered before ETLs are run. It is idempotent, so there is no downside to re-running it.
- `DAEMON` will run some combination of the above commands on a specified schedule. See `Main.java` for the exact scheduling.
Example: `java -jar target/core-1.0-SNAPSHOT.jar -command etl_setup -type manual`
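For a sense of how a `-command` flag like the one above might be dispatched, here is a hypothetical sketch. The command names mirror the documented list, but the flag parsing, enum, and dispatch logic are all assumptions; the real implementation lives in `Main.java` (and the `-type` flag is ignored here).

```java
import java.util.Arrays;
import java.util.Locale;

public class CommandDispatchSketch {
  // These names mirror the documented commands; the real enum lives in the repo.
  enum Command { ETL, DQ, HABITICA, PASSIVE_KIVA, WIKI, ETL_SETUP, DAEMON }

  public static void main(String[] args) {
    // Find the value following the -command flag, e.g. "etl_setup".
    int flagIndex = Arrays.asList(args).indexOf("-command");
    if (flagIndex < 0 || flagIndex + 1 >= args.length) {
      System.err.println(
          "Usage: -command <etl|dq|habitica|passive_kiva|wiki|etl_setup|daemon>");
      return;
    }
    Command command = Command.valueOf(args[flagIndex + 1].toUpperCase(Locale.ROOT));
    switch (command) {
      case ETL_SETUP:
        System.out.println("Would create the structures ETLs depend on (idempotent).");
        break;
      case ETL:
        System.out.println("Would run all ETLs, then data quality checks (DQ).");
        break;
      default:
        System.out.println("Would run " + command + ".");
    }
  }
}
```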