Almost free setup for NGOs, startups and side projects

What's this post about?

I want to inspire you to set up a professional working environment for your next idea with almost no money. To give you a concrete example, I'll walk you through our setup at Active Ambassadors. All the knowledge I have acquired over the last years has gone into this.

This post is aimed at readers who have no idea what they are doing, but it should also inspire people with similar setups. Let me know if you have ideas for improving this setup even further (E-Mail, Twitter). 🙂

Before we start, keep these points in mind while reading this post:

  • We don't want to spend money on our IT setup, our goal is to raise awareness for NGOs.
  • We work 100% remotely, some of us haven't even met!
  • This post isn't only there to list the tools, it's about the stuff we are really using.
  • I would follow the same setup for a (digital) company, club or side project.

What's Active Ambassadors?

Active Ambassadors is an organization founded by Leonard Schwier and myself. The idea was to raise awareness for NGOs with the skills we have developed over the last years. For that, we send our active ambassadors DIY kits to iron the logo of an NGO of their choice onto a jersey. To accomplish this, our team grew as Julia & Julia joined our journey ❤️

We don't make a profit. Currently, we pay for most of it ourselves. You can see all expenses and income on our transparency page.

Feel free to send us an email or a message on Instagram if you have any questions!

The tools we use

Flow of all the tools we use

As I'm a big fan of structured content, I split our tools into four categories: (1) Communication, (2) Operations, (3) Marketing and (4) Website. I'll briefly go over them and explain how we use them. We use all of these tools in the free tier; we only have to pay for the domain & emails. If you are a registered NGO, you can even use some premium tiers for free, e.g. on Slack or Google.

Communication

Tools: Slack, Google Meet

Our main communication tool is Slack. Since we are a rather young team and grew up with messaging solutions, no one had a problem adopting it. I'm personally also a big fan of separating communications, e.g. I don't want discussions about Active Ambassadors mixed up with my private chats in WhatsApp.

As we use Google Calendar for invitations, Google Meet was a natural fit for our calls.

Slack Overview

As you can see, we follow some simple naming conventions. To be honest, they are a little overkill, but I'm used to them and I like conventions. They are inspired directly by Slack and I have used them in several teams. They just work, even if everybody needs some time to get used to them.

  • a-announcements: Announcement channels are used to share information with the whole workspace, e.g. about the Christmas party or an all-hands.
  • b-bots: These channels are primarily used by bots which only share updates, e.g. a new comment on Instagram, a new order or a closed deal in a CRM.
  • p-projects: These are not only for projects as they are defined, they can also be used as cross-team communication channels, e.g. for an event organization where multiple teams are needed.
  • r-random: Just stuff that doesn't fit in any other channel, e.g. you can create one for a running dinner where everybody can share their meals without spamming other channels.

In other workspaces I also used t-teams and h-help but for us this isn't currently needed as we are a small team.

Operations

Tools: Airtable, IFTTT

We moved from Google Sheets to Airtable because it offers an API which can be consumed by our website to display expenses and income. We have two major categories with five sheets:

  • CRM (definition)
    • Active Ambassadors
    • Organisations
  • Finance
    • Expenses
    • Income
    • Cost per DIY kit

Our list of Active Ambassadors is somewhat a mixture between a CRM and an order system: if a user fills out the form, a new entry is added with all the necessary information. The ambassador also gets a confirmation via Mailchimp. The new order is additionally communicated via Slack; this automation is done via IFTTT. That message in Slack triggers Leonard to start the printing and shipping process. Afterward, the status of the order is reflected manually in the sheet.

Airtable - Shipping
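
If we ever drop IFTTT, the Slack part of this automation would be easy to replace: Slack's incoming webhooks only need a small POST request. Below is a minimal sketch in Node.js - the webhook URL, the function name and the example values are placeholders, not our actual setup.

// Minimal sketch of the "new order" notification, assuming a Slack incoming webhook.
// Requires Node.js 18+ for the global fetch; the URL below is a placeholder.
const WEBHOOK_URL = 'https://hooks.slack.com/services/T000/B000/XXXXXXXX';

async function notifyNewOrder(ambassador, ngo) {
	// Incoming webhooks accept a simple JSON payload with a "text" field
	const response = await fetch(WEBHOOK_URL, {
		method: 'POST',
		headers: { 'Content-Type': 'application/json' },
		body: JSON.stringify({ text: `New DIY kit order: ${ambassador} picked ${ngo} 🎉` })
	});
	if (!response.ok) throw new Error(`Slack webhook failed: ${response.status}`);
}

notifyNewOrder('Example Ambassador', 'Example NGO').catch(console.error);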

The organization's CRM is inspired by a standard sales funnel. In reality, it's just a Kanban board with four different states: First contact, Negotiation, Won and Lost. This helps us to keep track of organizations we have already contacted and creates transparency within the team.

Airtable - Organisations CRM

I think there isn't much to say about our finance sheets: nothing fancy, pretty simple, and straightforward. This data is reflected on our transparency page.

Airtable - Finances

If Notion provided the same functionality to connect other tools, I would drop Airtable to reduce the number of tools we use.

Marketing

Tools: Mailchimp, Later, UNUM

In the beginning, we had a newsletter with Mailchimp, but now all our communication efforts go into Instagram. To schedule and plan our posts, Julia uses Later and UNUM. That's not something I'm familiar with, so if you want, I could ask Julia for a guest contribution.

Knowledge/Tasks

Tools: Notion, Google Drive

Over the last few years, Notion has satisfied the desire of productivity junkies who want to create tools for their own needs without writing a single line of code. It has slowly spilled over into the business world, at least that's how it looked to me. You can do almost everything: use it as a wiki, build a custom CRM, track your tasks, or use it as a notebook.

Notion

We decided to use it for our tasks with a simple Kanban board as well as for some knowledge sharing. It's still a work in progress, but it's growing.

Google Drive only exists as a data grave: we only save documents (e.g. templates) or pictures there, while our knowledge lives completely in Notion.

Website

Tools: Nuxt.js, Netlify, Prismic, GitHub, Netcup, Figma

The next paragraphs will be very technical. If you aren't interested in this, I would still recommend getting your hands on a domain, even if you aren't hosting a website. It's just ten times more professional to send an email from luka@active-ambassadors.org instead of dungeonmaster99@gmail.com.

I built the website with Nuxt.js, a framework on top of Vue.js. So this is only something for you if you know how to do web development or know someone who does. The code is publicly accessible on GitHub, the version control system of my choice. The repository is linked to Netlify, where each change is deployed directly without any manual download or build process involved. During the automatic build, the required data is requested from Prismic, our headless CMS, and from Airtable. A change to the content in Prismic also triggers a new build. I would like to have something similar for Airtable, as all the numbers on our transparency page are consumed from our sheets. Because this isn't possible, I trigger a new build every 24 hours via IFTTT, as we don't rely on instant updates.
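
To give you an idea of what that build-time request can look like, here is a minimal sketch using Airtable's REST API. The base ID, the table name and the environment variable are placeholders for illustration, not our actual configuration.

// Minimal sketch: pull records from an Airtable table at build time.
// Requires Node.js 18+ for the global fetch; base ID, table name and env variable are placeholders.
const BASE_ID = 'appXXXXXXXXXXXXXX';
const TABLE = 'Expenses';

async function fetchExpenses() {
	const url = `https://api.airtable.com/v0/${BASE_ID}/${encodeURIComponent(TABLE)}`;
	const response = await fetch(url, {
		headers: { Authorization: `Bearer ${process.env.AIRTABLE_API_KEY}` }
	});
	if (!response.ok) throw new Error(`Airtable request failed: ${response.status}`);
	const { records } = await response.json();
	// Every record exposes its cell values under "fields"
	return records.map((record) => record.fields);
}

fetchExpenses().then((expenses) => console.log(expenses));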

The domain is managed by Netcup and costs us 17.55 € per year, which also includes the email hosting.

For creating mockups, e.g. to discuss version two of our website, I used Figma, as everybody can access the files in real time and edit them or leave a comment. I also used Figma to create our logo.

Most of this could be replaced by WordPress and some webspace. An example of such a setup can be seen at my mother's organization: Respekt Menschen (respect humans). I just prefer to have full control over the performance and the layout while going for a best-of-breed approach. Each part of this setup can easily be replaced, e.g. Prismic with nuxt/content or Strapi (self-hosted).

Reflection

We use a lot of SaaS as it is easy to set up and maintain, but I personally think a completely open source approach would fit our philosophy better. For now, that's not realistic, as it would dramatically increase the maintenance effort as well as the costs, e.g. we would have to host servers for alternative solutions. To be honest, maintenance is the main argument, as all of this happens in my free time. If I had the time, I would like to drop IFTTT, write my own bots and deploy them on our own server; the problem is just that the outcome would be the same (plus additional effort) without adding value to the NGOs we want to support.

The other question I'm not completely sure about is whether all these tools are necessary. You could definitely run the same organization with phone calls and a notebook, but we like the digital lifestyle, and I'm not sure we could drop one of the tools while staying at the same productivity level.

Key takeaways

  1. The free tier of many tools is completely sufficient.
  2. Get a fucking domain. It doesn't have to cost 17.55 € a year - there are even cheaper ones - but please get a domain and use it for your emails. I'll help you for free if you need support!
  3. Don't blindly trust lists of tools; decide for yourself what you and your team need.

Add GitHub Actions for testing & linting to your repository

GitHub Actions are easy to configure and should be used in all npm/yarn based projects.

Why should you add these GitHub Actions to your repositories?

GitHub Actions are there to automate workflows directly in GitHub without the need to set up a full-blown CI/CD pipeline. You can use them by simply adding a file to the .github/workflows/ directory of your repository. Also, the pricing is very accommodating: I don't think I'll run into the limits with my private projects, especially as there are no limitations for public repositories.

Passing GitHub Action Workflow with a successful build on a PR

During the time I worked on the German Corona-Warn-App, I noticed how powerful a CI/CD pipeline is. Therefore, it was clear to me that I want such a safety net for myself. If I'm rushing something or think it's just a quick fix, I would love to see this in the PR and not in production. With these two small checks executed for every PR, I'll spot errors more easily. It also opens up possibilities for collaboration, as everybody has to fulfill the same checks.

Adding GitHub Actions

Prerequisites in package.json

  1. Check the name of the scripts you want to be executed. In this example, I want to run test & lint.
{
...
"scripts": {
    "serve": "vue-cli-service serve",
    "build": "vue-cli-service build",
    "test": "vue-cli-service test:unit",
    "lint": "vue-cli-service lint"
  }
...
}
  2. (Optional) Let the build fail if test or lint doesn't succeed. For that, you need to chain the checks with & and the final build step with &&:
"build": "npm run lint & npm run test && vue-cli-service build",

How does the build fail and why?

"Because npm scripts are spawning a shell process under the hood [...]" (Corgibytes, Kamil Ogórek) we can use the bash syntax to run our scripts with some additional logic. In this case we are using & to run commands in parallel and && to run commands sequentially (more details).

This means that if npm run build is executed by your build pipeline (e.g. Netlify or Vercel), it runs the first two commands in parallel, and the build is only started if they succeed.

It's redundant, as the GitHub Actions run in parallel to the build task and their results aren't used for the build. This means the checks happen once in the GitHub Action and a second time during the build. However, since I don't want a build with lint errors or failing tests, I'm okay with this redundancy. It looks like I'm not the only one with this requirement (here & here).

Implementation

GitHub Action - Pull request with a failed test job which led to a failing build

Just add the following file to the .github/workflows/ directory of your repository. The name attributes are visible in the PR (see image). You only have to change the last line of each job if you want to run something else, e.g. Cypress. So it's quite easy to adapt to any other project, even if my examples are based on Vue.js and Nuxt.js:

name: Checks
on: push
jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm run lint
  test:
    name: Test (jest)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm install
      - run: npm run test

My initial commit with these actions was rejected by GitHub, as my personal access token in WebStorm didn't have the permission to push workflows. So you may need to create a new token for your GitHub client.

Conclusion

I started adding the above workflow to jura.education (PR) and with this post the lint workflow will be added to this website. From here I will add the workflows piece by piece to all my repositories.

I think it's a super simple way to enhance your code quality for no cost. So let's give it a try, even for small personal projects.

Docker Compose for Nextcloud with Traefik 2 (SSL)

I faced some problems setting up Nextcloud with Traefik, and that's why I'm sharing my docker-compose.yml.

Intro

I set up Traefik 2 on a VServer at Netcup, mainly to use Nextcloud. Since I am neither a Docker, Traefik nor Nextcloud expert, it took some time to set everything up, as most of the docker-compose.yml files I found weren't working. So here is my short story about setting up Nextcloud.

Complete docker-compose.yml

version: '3.7'

services:

  db:
    image: mariadb:latest
    container_name: nextcloud_db
    volumes:
      - nextcloud-db:/var/lib/mysql
    networks:
      - default
    restart: always
    environment:
      TZ: UTC
      MYSQL_ROOT_PASSWORD: SUPER_SECRET
      MYSQL_DATABASE: db
      MYSQL_USER: admin
      MYSQL_PASSWORD: SUPER_SUPER_SECRET

  redis:
    image: redis:latest
    container_name: nextcloud_redis
    restart: always
    networks:
      - default
    volumes:
      - nextcloud-redis:/var/lib/redis

  nextcloud:
    depends_on:
      - redis
      - db
    image: nextcloud:stable
    container_name: nextcloud
    volumes:
      - nextcloud-data:/var/www/html
    networks:
      - web
      - default
    restart: always
    labels:
      - traefik.http.routers.nextcloud.middlewares=nextcloud,nextcloud_redirect
      - traefik.http.routers.nextcloud.tls=true
      - traefik.http.routers.nextcloud.tls.certresolver=lets-encrypt
      - traefik.http.routers.nextcloud.rule=Host(`cloud.YOUR-DOMAIN.com`)
      - traefik.http.middlewares.nextcloud.headers.customFrameOptionsValue=ALLOW-FROM https://YOUR-DOMAIN.com
      - traefik.http.middlewares.nextcloud.headers.contentSecurityPolicy=frame-ancestors 'self' YOUR-DOMAIN.com *.YOUR-DOMAIN.com
      - traefik.http.middlewares.nextcloud.headers.stsSeconds=155520011
      - traefik.http.middlewares.nextcloud.headers.stsIncludeSubdomains=true
      - traefik.http.middlewares.nextcloud.headers.stsPreload=true
      - traefik.http.middlewares.nextcloud.headers.customresponseheaders.X-Frame-Options=SAMEORIGIN
      - traefik.http.middlewares.nextcloud_redirect.redirectregex.permanent=true
      - traefik.http.middlewares.nextcloud_redirect.redirectregex.regex=https://(.*)/.well-known/(card|cal)dav
      - traefik.http.middlewares.nextcloud_redirect.redirectregex.replacement=https://$${1}/remote.php/dav/
    environment:
      REDIS_HOST: redis
      MYSQL_HOST: db:3306
      MYSQL_DATABASE: db
      MYSQL_USER: admin
      MYSQL_PASSWORD: SUPER_SUPER_SECRET
      TRUSTED_PROXIES: 172.18.0.1

networks:
  web:
    external: true

volumes:
  nextcloud-data:
  nextcloud-db:
  nextcloud-redis:

Test your set up and security

After you have fired up your Nextcloud, you should check if everything is working as expected. Nextcloud offers two ways to help you with that:

Nextcloud administration overview

Usage of CalDAV and CardDAV

It's quite easy if you use the docker-compose.yml above. You need your domain, your user and, as recommended, an app password (Settings > Security > "Create new app password"). With these credentials you can go to any client which supports CalDAV/CardDAV. In the screenshot below you can see a CalDAV setup in the iOS settings.

  • Server: cloud.YOUR-DOMAIN.com
  • User: user
  • Password: app password

iPhone settings calDav example

I spent some hours setting all of this up; here is a list of all the links I used. The DigitalOcean tutorials are just awesome and, as far as I can tell, always up to date. I would only start Traefik via a docker-compose.yml to be consistent.

My perfect Home-Office Conference Call & Podcasting Setup

Intro

A friend had the idea for our podcast Techmob Show (German). I got one requirement before we started the first recording: get a new microphone. During this time, a colleague also stepped up his camera game, which got me thinking. And so one thing led to another...

Desk with camera, ring light, display, microphone with an arm, stream deck and MacBook.

A lot of the following can be achieved much cheaper. But in this kind of area, I prefer plug-and-play solutions. Besides that, I had one hard requirement: USB-C. The hardware I bought should last some years, and I don't want to be annoyed in two years because I have to use a USB-A to USB-C adapter.

Laptops, Dock & Monitor

My setup has to work with my private MacBook 15" (Mid 2014) and my business MacBook 16" (2019). The ports are the major difference - basically from every port you need down to USB-C only. As I spend most of my time working at this desk, I focused on my business MacBook, which is connected to the Belkin Thunderbolt 3 Dock Pro Hub via one cable - quite convenient compared with the previous dongle setup. Mouse & keyboard connect in both cases via Bluetooth. The 2019 MacBook connects to the Samsung display (Samsung SE790C) over the dock via DisplayPort, which leaves HDMI for the 2014 MacBook. It's my first widescreen display, and I bought it used from a friend. If you compare it with Retina displays, you see a difference, especially with text. As I stare at this display while consuming or producing text, I really would like to step up this part of my setup. Overall, a widescreen monitor is just nice and comes in very handy for pull request reviews.

Streamdeck

Elgato Streamdeck (150 €)

Before I even bought the microphone, I wanted to play with an Elgato Streamdeck to speed up some repetitive tasks. Alternatives to this would be an external number pad or the FreeDeck (Video). A friend of mine already used a Streamdeck, mainly for ABAP programming. He mentioned that it's only a gadget, though a nice one. After two months of use I have to admit he was right, but I use it multiple times daily. Some of my most used actions:

  • Start/Stop Spotify: If I get called via Microsoft Teams and press Play/Stop on my keyboard it pauses the ringtone but not Spotify. With a Plugin I stop Spotify directly without going over the system audio. A very small convenience, but I would not want to miss it anymore.
  • Open Jira Filters and Boards: A lot of my work happens in Jira where I have some Boards and Filters I visit multiple times a day.
  • Control my Elgato Ring Light: I sometimes change the brightness or Kelvin depending on the sunlight, but most of the time I want my default preset.
  • Shortcuts for my IDEs (Android Studio, Webstorm): There are just some shortcuts I can't remember, especially around Git (Git with UI tools <3). I created custom shortcuts to not interfere with existing ones and bound them to the Streamdeck, e.g. Fetch, Pull, Clean, Build.
  • Run Commands: Mainly used while testing and reviewing some Android PRs, e.g. open language or time settings.

All of this is still pretty basic and I would like to do even more with it.

Microphone

Rode NT-USB Mini (100 €)

One of the goals was not to spend a fortune but still get significantly better quality than my AirPods Pro. I also wanted a stand, as I wasn't certain whether I wanted to spend an extra 20 € on an arm - and I can tell you: get an arm for your microphone. Otherwise, it's quite annoying, as it's always in the way and you are too far away to get nice audio quality. It's still nice to have a microphone with an integrated stand, as I can change locations, which is relevant for our podcast recordings. With all that in mind, I went for the Rode NT-USB Mini, and until now I have only received compliments about the sound quality. To be honest, 50% of the sound is the microphone arm - it just looks more professional when you see it in my webcam image. If you don't need a built-in stand, you should check out the Rode Podcaster. As an alternative, I would go for some wired headphones with a microphone.

Light

Elgato Ring Light (200 €)

I quickly knew I wanted a ring light, as I don't want a full-blown light setup, and I like the look of a ring light. I may also have been influenced by the video "Elgato's new Ringlight. But for whom?" (German). Most cheap ring lights come with a tripod, which doesn't make any sense in my current setup. I'm also thinking about getting a height-adjustable table, so I only wanted to go with a table clamp. As someone who knows nothing about light, I struggled to pay so much money for something as banal as light. I went for this light because it was convenient, and I was pretty confident the quality would be right. As an alternative, you can use a reading lamp or turn your desk towards your window so that you benefit from natural light. Or maybe the Razer Ring Light is something for you at 90 €.

Camera

Logitech Streamcam (160 €)

Like most people working from home, I used my built-in laptop camera, and to be honest, for the first half of the year I never turned it on. In the meantime, my opinion has changed, and I think everyone should turn on their cameras. It still doesn't replace a real meeting, but it brings you a little closer. Most of the time the built-in camera is very unfavorable, as it records your nostrils, has poor low-light performance and a bad resolution.

My first idea was to use my smartphone, as it can record in 4K and I always have it with me, and thanks to the Elgato EpocCam Pro app it's not even a problem. But there is one thing I mentioned in the beginning: I want a plug-and-play solution. Yes, this setup is straightforward, but I would have to set it up multiple times a day, every day. That's not what I imagined, and I hoped to solve it with my next idea: using my GoPro Hero 8 Black. This also works okay, but the software to use your GoPro as a webcam is very buggy; every morning I had to do some of the following (in random order): restart the GoPro, unplug the cable, restart the program, unplug & shut down the GoPro, start the program without the GoPro being connected. The image quality was good, and I gained the freedom to move my laptop wherever I want. Before this, it was positioned under my widescreen monitor; now it sits to the right of my display.

During this time the previously mentioned colleague bought a used camera with interchangeable lenses. I have to admit his image quality is awesome, with a natural bokeh effect. To achieve this, he had to hack his camera, which runs Android 2, and that was something I didn't want to do.

With this, I had two options left: 1) a new SLR camera which can easily be used as a webcam, or 2) a top webcam. Option one starts at 500 € without a lens, and option two ends at around 250 €. Keeping in mind that I use it mainly for video calls, the decision was easy, but then I had to decide on a webcam. Three were shortlisted: Logitech Streamcam (160 €), Brio Ultra-HD Pro Business-Webcam (240 €), Razer Kiyo (110 €).

I watched some comparison videos to get an understanding of the differences and to find the image I liked the most and that was worth the money. The winner is the Logitech Streamcam: USB-C cable (unfortunately firmly attached), mid-range price (got it for 120 €), 1/4" mounting thread.

Most 25 € Full-HD webcams would almost look the same on a small square in your favorite video call software, e.g. this one.

Samples

Conclusion

I had a lot of fun dealing with the topic so it was totally worth it. For now, I have the perfect setup for my requirements. All of this could be a lot cheaper, but this would require more tinkering and is probably not as convenient as this setup. Having a setup like this is for me mainly about fun. I'm still not a streamer or content creator who needs this. No matter how much you want to spend there are two small points you can always follow:

  • Always use a headset: It can be quite annoying if your counterparts hear themselves in your microphone. Think about a long conference call...
  • Turn on your camera: Not only in these remote times is it nice to see your counterpart. Be the counterpart you would also like to have. If you want more expensive inspiration, I recommend the video "Next Level Video Calls | Best Homeoffice Setup Money Can Buy" from Christoph Magnussen.

I only linked to the manufacturers; most of the time it's more expensive there than at other retailers. So please check the prices :)

Thanks to all my colleagues and friends who had to endure me during the trial and error phase.

Build a custom search engine in your browser

I show you why custom search engines in your browser are a nice tool and how I use them.

Intro

A friend - Sebastian Furkert - showed me this little trick during our studies. With this, you can build your own search engine. Okay, that might sound a little over the top, but basically, that's what you're doing. If you often search on the same websites this can save you a lot of time.

From my point of view, there are two use cases: search and find. I would define searching as being open to the outcome and not knowing what to expect. You want to find something if you are looking for something specific, something expected, something with an identifier.

There are many other ways to achieve this, but if you are signed in to Chrome, your search engines will be synced across multiple devices. You could also use Keyboard Maestro (macOS) or AutoHotkey (Windows) to trigger something like this from anywhere.

Examples

GIF: Use custom search engine in Google Chrome

A selection of websites where you can search. I use some of them almost daily; others are here to show you what's possible.

Find

If you have a unique identifier you can find it. I don't even want to know how much time the Jira shortcut has saved me in the last year.

*I use this multiple times a week.

How to add a custom search engine

GIF: Add a new custom search engine to Google Chrome

  1. Right-click in the URL bar (also known as the address bar or omnibox)
  2. Click on Manage search engines...
  3. Click on Add
    1. Set a Search engine title, e.g. GitHub
    2. Set a Keyword to trigger your search engine, e.g. gh
    3. Set a URL with %s in place of the query, i.e. replace the search term with %s
  4. That's it!

You can use any website which reflects search results or identifiers in the URL. There are cases where you can't use this, as the search is only an overlay and doesn't manipulate the URL, e.g. https://vuejs.org/v2/guide/.
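
To give you a concrete starting point, here are a few patterns that work; the Jira domain is a placeholder you would replace with your own instance:

  • GitHub (search): https://github.com/search?q=%s
  • Wikipedia (search): https://en.wikipedia.org/wiki/Special:Search?search=%s
  • Jira (find an issue by its key): https://YOUR-COMPANY.atlassian.net/browse/%s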

Firefox search engine shortcuts

You can use the same concept in Firefox; it's just called Search Shortcuts.

Automatically generate social media images for Nuxt.js with a git pre-commit hook

Generate social media images for a nuxt/content project in a git pre-commit hook; this logic can be adapted to any other CMS.

Intro

For this website, I want to generate images for social media automatically every time I publish something new. The image for this post looks like this:

Generated social media preview with the script in this post; it shows the title of the post

There is a Nuxt.js-specific article using Cloudinary by David Parks, which is based on an article by Jason Lengstorf. I somehow like the idea, but for this website, I want as much control as possible and as few external dependencies as possible, especially on other servers.

With these requirements, I thought about the article by Flavio Copes which I used to generate Instagram posts for Techmob Show. You can even get the package on npm - @techmobshow/generate-podcast-cover. It uses node-canvas, which is okay, but I wouldn't use it again after a friend - Timon Lukas - came up with the idea to use a browser automation tool. The problem with node-canvas is its dependencies; check out what you have to install. I had a conflict between a local and a global version and wasn't able to fix it. I also have to do the styling programmatically and even need to handle multi-line text myself, as that doesn't work out of the box.

The final solution is based on Playwright which allows me to write and style a template.html, inject the title and take a screenshot. That's what I feel comfortable with and that's fun for me.

How to generate the images

Nuxt.js has a strongly opinionated directory structure, which I like, but somehow it felt wrong to put this script, which only runs locally, in assets. For that, I created a scripts directory (who knows what will be automated next) where all my Node scripts that aren't part of the website build will live. This is important, as these scripts aren't handled by Nuxt.js, which uses webpack with Babel. They are executed directly through Node.js, which means you can't use import X from 'x', but it also allows you to use fs, which means you can interact with the file system.

Let's come to the interesting part. For the sake of simplicity, I'll put everything in one file and explain the automatic generation in detail. On my website I have three files to get this working:

  1. util.js - shared logic
  2. generateAutomatic.js - automatic image generation for new posts and lists
  3. generateManual.js - manual image generation by passing a title via the command line
const { readdirSync, readFileSync } = require('fs');
const { chromium } = require('playwright');
const path = require('path');

const ROOT_PATH = process.cwd();
const SOCIAL_PATH = `${ROOT_PATH}/static/social`;
const POSTS_PATH = `${ROOT_PATH}/content/posts`;
const LISTS_PATH = `${ROOT_PATH}/content/lists`;

/*
 * Opens an HTML template, sets the title, takes a screenshot and saves it locally as png
 * @param {Page} page
 * @param {string} title
 * @param {string} slug
 */
const generateImage = async (page, title, slug) => {
	const URL = `file:///${path.join(__dirname, '/template.html')}`;
	const SCREENSHOT_PATH = `${SOCIAL_PATH}/${slug}.png`;
	await page.goto(URL);
	// strange syntax, check https://playwright.dev/docs/api/class-page#page-eval-on-selector for more infos
	await page.$eval('.title', (el, title) => (el.textContent = title), title);
	const cardHandle = await page.$('.card');
	await cardHandle.screenshot({
		type: 'png',
		path: SCREENSHOT_PATH
	});
};

/*
 * Returns if there is an image for the given slug
 * @param {string} slug
 * @returns {boolean}
 */
const doesImageAlreadyExist = (slug) => {
	const files = readdirSync(SOCIAL_PATH);
	return files.find((file) => file.startsWith(slug));
};

/*
 * Posts and lists contain a title followed by a description in YAML
 * @nuxt/content isn't accessible so this has to be parsed manually
 * Returns the extracted title
 * @param {string} str
 * @returns {string}
 */
const getTitle = (str) => {
	const start = 'title: ';
	const end = '\ndescription: ';
	return str.substring(str.indexOf(start) + start.length, str.indexOf(end));
};

/*
 * Returns meta data to a given file
 * @params {string} name
 * @params {string} basePath
 * @returns {{name: string, path: string, slug: string}}
 */
const fileToMeta = (name, basePath) => {
	return {
		name,
		path: `${basePath}/${name}`,
		slug: name.split('.')[0]
	};
};

/*
 * Instantiate a new browser with playwright, get all potential files (lists, posts)
 * and check if there is already a social media preview image in SOCIAL_PATH
 * if not execute generateImage(), nothing will be returned
 */
const generateSocialMediaPreview = async () => {
	console.info('>> GENERATE SOCIAL MEDIA PREVIEWS <<');
	console.info('🆕 newly generated, 🛑 already exists');
	console.info('-------------------------------------');
	const browser = await chromium.launch();
	const page = await browser.newPage();
	const posts = readdirSync(POSTS_PATH).map((name) => fileToMeta(name, POSTS_PATH));
	const lists = readdirSync(LISTS_PATH).map((name) => fileToMeta(name, LISTS_PATH));
	const files = [...posts, ...lists];
	for (const file of files) {
		const content = readFileSync(file.path, 'utf8');
		const title = getTitle(content);
		if (!doesImageAlreadyExist(file.slug)) {
			console.info('🆕', title);
			await generateImage(page, title, file.slug);
		} else {
			console.info('🛑', title);
		}
	}
	await browser.close();
};

/*
 * Entry point for generateSocialMediaPreview() when this file is executed
 */
(async () => {
	try {
		await generateSocialMediaPreview();
	} catch (error) {
		console.info('Error:', error);
	}
})();

I would say this code is quite self-explanatory, but as I have spent some time with it, I might be missing something. Feel free to get in touch on Twitter if you have any questions. However, I must not forget the appropriate HTML template.

<!DOCTYPE html>
<html lang="en">
	<head>
		<title>Hello</title>
		<link rel="preconnect" href="https://fonts.googleapis.com" />
		<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
		<link
			href="https://fonts.googleapis.com/css2?family=Open+Sans:wght@700&display=swap"
			rel="stylesheet"
		/>
		<style>
			* {
				box-sizing: border-box;
			}
			body {
				font-family:
					Open Sans,
					Helvetica Neue,
					Arial,
					sans-serif;
				color: #121218;
			}
			.card {
				width: 1200px;
				height: 630px;
				background: url('./template.svg') no-repeat;
				padding: 5rem;
			}
			.title {
				font-size: 5rem;
				font-weight: bold;
			}
		</style>
	</head>
	<body>
		<div class="card">
			<div class="title">Will be replaced! Wuhu! Party :)</div>
		</div>
	</body>
</html>

[template.html]

It's super simple: it directly uses Google Fonts with an SVG graphic as the background, so only the text has to be styled.

Automation

As I'm forgetful, I need to automate the generation. As post titles normally won't change, this is a one-time job per post. Therefore, I don't want to generate the images during the build step on Netlify. This step can happen locally, after a change, before I commit or push something. That's why I thought about a git pre-commit hook. Alternatively, a GitHub Action could be used which executes the same script and then commits the newly generated files. I stuck to the pre-commit hook for now, but with that, the odyssey began...

I looked into the native git pre-commit hook and was able to execute the file, but somehow the image generation wasn't triggered. I had the same problem with pre-commit and husky. Nothing worked. So I wrote my first Stack Overflow question in a while and asked some friends what they use and whether they had an idea. The same friend who had the idea to use browser automation tools to generate the images also helped me here and found two tickets stating that WebStorm isn't working with git hooks (this and this). That explained why my script was never executed*. For now, I have to commit via the command line and not via the WebStorm UI, but hey, that is possible. During my research everybody recommended husky, so I tried it again and stuck with it. As the script was now executed, the next problem occurred: newly generated files aren't committed. But Stack Overflow already has an answer for this. You need a pre-commit and a post-commit hook; read more about this in the linked question.

I only need to define a new script in my package.json. This will be executed in my post-commit hook - and don't forget to also define a pre-commit hook.

{
  ...
  "scripts": {
    ...
    "socialMedia:auto": "node scripts/generate-social-media-preview/generateAutomatic.js",
    ...
  }
}

[package.json]

#!/bin/sh
. "$(dirname "$0")/_/husky.sh"

# https://stackoverflow.com/questions/3284292/can-a-git-hook-automatically-add-files-to-the-commit
touch .commit
exit

[.husky/pre-commit]

The new script is then called via npm run socialMedia:auto in the post-commit hook below.

#!/bin/sh
. "$(dirname "$0")/_/husky.sh"

# https://stackoverflow.com/questions/3284292/can-a-git-hook-automatically-add-files-to-the-commit
if [ -e .commit ]
    then
    rm .commit
    npm run socialMedia:auto
    git add .
    git commit --amend -C HEAD --no-verify
fi
exit

[.husky/post-commit]

Manual

For some unique pages like home or imprint I also need images, but automating these would be a little overkill. The corresponding pages don't even have the required meta data - they could, but currently they don't. For that, I wanted a way to generate social media previews manually by passing in a title.

{
  ...
  "scripts": {
    ...
    "socialMedia:manual": "node scripts/generate-social-media-preview/generateManual.js",
    ...
  }
}

[package.json]

You only need to add a script to your package.json and then parse the arguments to retrieve the title.

const title = process.argv[2].replace('--title=', '');

Finally, execute your new command with the corresponding title. And yes, the -- is needed: npm passes everything after -- on to the script instead of treating it as one of its own options.

npm run socialMedia:manual -- --title="Your preferred title"
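
For completeness, here is a minimal sketch of what generateManual.js could look like. It assumes that generateImage() from the script above was extracted into util.js, and the slugify() helper is a simplified placeholder for whatever file naming you prefer.

const { chromium } = require('playwright');
// Assumption: generateImage() from the script above was moved to util.js
const { generateImage } = require('./util');

// Simplified placeholder: turns "Your preferred title" into "your-preferred-title"
const slugify = (title) =>
	title
		.toLowerCase()
		.replace(/[^a-z0-9]+/g, '-')
		.replace(/(^-|-$)/g, '');

(async () => {
	const title = process.argv[2].replace('--title=', '');
	const browser = await chromium.launch();
	const page = await browser.newPage();
	await generateImage(page, title, slugify(title));
	await browser.close();
})();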

Conclusion

WebStorm does some strange things. I'm a big fan of the JetBrains IDEs but didn't expect them to change the default behavior of git.* But in the end, it works quite well, I learned a lot, and I'll consider husky for other projects as well. Playwright showed again why I fell in love with it and why I'll use it for every browser automation project I have. Finally, I also got an idea for another approach: GitHub Actions. But that's something for another post.

*currently it's working with commits via the UI

My nerd path - 10+ years of (personal) development

Preamble

This post is a story about my life, or at least a big part of it. It shows how I acquired my technical knowledge and illustrates where my interests lie and how they have developed over almost 15 years. This story should serve as an inspiration and enable others to follow their passion, because not every person doing something with technology has to follow a strict path - there are multiple ways to reach the goal. Furthermore, I want this post to be something I can look back on someday to remember how everything has evolved since I was a young boy.

Tobias Schlottke, who hosts alphalist.CTO, inspired me to write this post. He always starts an episode with the question: "What's your nerd path?". He got me thinking about how I became who I am in technology.

The story starts when I was around 11 and had no clue about computers. It continues with a self-motivated learning phase in which I became competent in web technologies, and ends with my last job as a technology consultant - which also marks the end of this chapter of my life.

Getting to know the internet

When I was 11, there was a platform called Knuddels; nowadays, I would describe it as a predecessor of Slack with mini-games. If you were an active user, you got promoted to Family Member. You'd get new features, of which I remember the homepage the most fondly because it somehow kicked off my nerd path. On my "homepage," you could learn more about my interests and hobbies. My initial "homepage" was visually very dull. I just had some colorful text without nice graphics. By contrast, other users had nice borders around each box and animated pictures. So I got curious and found some templates I could copy over that weren't even HTML; they were written in BBCode. I'm not sure when I last saw BBCode - maybe in a forum around that time? Just having something nice looking wasn't enough though, I wanted to make it my own. After a while, I recognized how powerful the style attribute is and could change the look of my whole homepage. And that's what I did.

Afterward, I started playing a browser game in a clan with a styleable information page, again with BBCode. This was the first time I used GIMP, and the first time I became aware of what open source is.

In the end, simple websites started my journey, and as I write this, I also understand why I'm so addicted to the web. It's been with me since I was a young boy dipping my toes in the vast ocean that is the internet, and it's always been accessible to me. Thanks to my parents ❤️.

Using the website builder

Website with a drop shadow and border radius as an image

The next step was to publish my first website. To be honest, I can't even remember the content. Nevertheless, I won't forget that it was at Homepage Baukasten (Homepage Builder), which gave you a pretty lovely domain: .de.tl. And, as before, I stuck to this place because there was a very active community, but this time with "experts" in programming. I remember how I stood in awe of everybody who could build a website from scratch on this platform. It felt like magic. I still have this feeling today for stuff I don't understand. After customizing some designs from the community, I created my own layouts. If you wanted to create a nice background for your content, as you see above, you had to create an image, split it into three parts and put them back together. It wasn't as convenient as it is nowadays.

<div id="container">
	<div id="top"></div>
	<div id="middle">
		<h1>About Us</h1>
		<p>Lorem Ipsum...</p>
	</div>
	<div id="bottom"></div>
</div>
#top {
	background: url('/container-top.png') no-repeat;
}
#middle {
	background: url('/container-middle.png') repeat-y;
}
#bottom {
	background: url('/container-bottom.png') no-repeat;
}

Another recognizable feature of websites at this time was the glossy, well-known Web 2.0 button. I used masks in GIMP for the first time to achieve this. It was quite fun to rebuild it in Figma, the third generation of graphic tools I have used to create websites.

Glossy Web 2.0 button with the text: Click Me

Hosting my own websites

The next logical step was to build an entire website from scratch. For that, I needed some hosting, and everybody around me used bplaced.net as they had a decent free tier. But with this, I still didn't know how to handle a webspace. I remember when a stranger I got to know over the Homepage-Baukasten forum showed me how to use my webspace via Skype & TeamViewer. He also showed me FileZilla to upload my files. Quickly, I got annoyed by this manual step. Naturally, I wanted to speed up my process, so I installed a plugin in Notepad++ to directly edit files on the server. I didn't know that you shouldn't change something directly in production without testing, but it was convenient.

I started to look more into PHP at this time, as I was lazy! I just didn't want to update my header and footer for every page when I change something, e.g., add a new link to the menu. "Don't repeat yourself" (DRY) wasn't something I was aware of, but it was already part of my philosophy.

<?php include ("./header.php"); ?>
<body></body>
<?php include ("./footer.php"); ?>

As I already knew so much about HTML & CSS, I was very disappointed by my IT teacher, who asked us to learn layouts with tables, although every good web developer used float: left. He was kind enough to let me plan and conduct some lectures in my advanced IT course at school. I even wrote a small PHP script to make it easy to share all my code with the others, as I wasn't aware of other solutions. Sadly, we had to use table layouts for the final exam.

Getting more advanced

During this time, I started using WordPress and looked into a mystical new world: JavaScript. If I remember correctly, my first blog with WordPress was a relaunch of bagarts.de, where I could write about online marketing. During this time, I thought marketing might be my passion. How wrong I was... And how could I believe that someone would care about the two cents of a 16-year-old boy without any substantial knowledge in this area? At this point, I got to know Netcup, the only web hoster I have had since then. I tried other hosters for other organizations, and I don't buy many domains at Netcup, but the hosting is still perfect, and the support is incredible.

Other solutions like the German homepage builder Jimdo were also on my radar. I created a simple website for my handball team and my parents' construction company. After only one year, I moved to a self-built implementation (see archive.org). I used Koala (a GUI tool) for Sass. That was it - no framework or anything like that included, just PHP, HTML & CSS (okay, and a third-party lightbox).

An old version of IVO-BAU.de - showing the title and a picture of the office

My first customer was my sports club. I developed a WordPress multisite with a custom theme. I'm still quite happy with how it looks. Of course, it could use some optimization, but it's still alive, despite some users trying to break it with an entire blog post in red and Comic Sans. During this project, I fell in love with the concept of CPTs (Custom Post Types) and subsequently added some: teams, department management, and events. Then I enhanced all of it with ACF (Advanced Custom Fields), e.g., to link to the current table for each team. I learned a lot and would do it entirely differently nowadays. Like on my previous homepages, I was in charge of the design, assets, hosting, and development.

Screenshot of handball.tv-edigheim.de showing the start page

Learn programming

In 2016 I started a corporate study program for a bachelor's in Business Informatics focusing on Sales & Consulting. For three years, I alternated in a three-month cycle between working at SAP (my partner company) and studying at DHBW (my university). At my company, I learned a JavaScript framework called SAP UI5, which is not my favorite one.

I had the opportunity to work in one of the most impressive teams I have ever been in. Everyone was fantastic, from the senior developer, designers, and user assistance. This team dynamic was extraordinarily unique, and I learned a lot. This atmosphere, where everyone is respected, constructive criticism is appreciated, problem-solving and free time are done together, continues to be what I strive towards in my choice of teams.

Even though work taught me a lot, I still learned most of my skills in my spare time on projects with friends. Fortunately, one of them showed me Vue.js. My new love, still today! It feels right to write everything in the originally intended languages: HTML with a very intuitive templating syntax, CSS is CSS (or Sass/SCSS if you prefer), and JavaScript is JavaScript. It makes sense if you are used to the fundamentals of these three languages.

During this time, I got to know many other remarkable technologies and services: Node.js, Python, Nuxt, headless CMSs, Netlify, Vercel, serverless, Docker, and Firebase. I didn't have a lecture on any of these topics - I learned all of them on my own. If you wanted to give my approach a name, I would go for "problem-based learner": if I want to achieve something and don't know how to do it, I'll learn the skill. Which is somehow lovely but sad, as my journeys most of the time end at the surface of a new technology. It also makes it hard to dive deeper into new technologies, as I'm bored by the first chapter of Udemy courses, and then it's hard to keep the motivation up.

I still wouldn't call myself an expert, but I can build a full-stack web application on my own. I always say it's enough for a prototype or an MVP. After that phase, experts should take over. Nevertheless, at this time, it looked like I could become a full-time developer, as I fulfilled all the requirements: I started a lot of side projects that I never finished.

Besides all these technologies, I started to follow my passion for UX/UI design. I went into design departments, and in all the projects I did, I was responsible for the logo, colors, typographies, and mockups. You can check out my first UI design, schmackofatz, on the projects page if you want to see my skills during this time of my life. It was my first complete UI design, and even though I would do it quite differently today, I'm still proud of the outcome. Later, I had to implement the UI for this idea in Vue.js.

On top of that, I could live out my love for tools. Digital tools. I changed the whole infrastructure of the oldest finance student club in Germany. I was responsible for setting everything up for a newly founded club - and for fixing all the wrong decisions I had taken two years later. My favorite failure is the email naming convention. We wanted to look like a startup and decided to only use first names for the email addresses, e.g., luka@q-summit.com. I just didn't consider that another Luka could join the club.

On my own, I tried a lot of tools in different areas: to-do lists, note-taking, PDF annotation, data storage, calendar, email. I'm sure I've annoyed most people around me with yet another great new tool, and not just once. Thank you for still being friends with me. By the way, an Airtable with all the tools I found and used is currently in the making (and will be published soon).

In the context of this chapter, I really need to say thanks to two people: Frank & Henry! You know why. ❤️

Becoming a Technology Consultant

Shortly before my vocational contract with SAP ended, I found an excellent department where I became a Technology Consultant for Mobile UX (EMEA). In the beginning, I worked with the SAP Mobile Development Kit, focusing on the SAP Asset Manager for iPads. Shortly after I had familiarized myself with the subject, Covid-19 started, and the German government commissioned an app: the Corona-Warn-App. I started as an Android developer before I became the team lead. In this way, I found out that I like to take on responsibility, make decisions, organize, and prioritize. With these new tasks, I slowly left development behind me.

This section is relatively short as I was heavily focused on my work during these two years. Hence, I "only" started two projects with some friends:

Ending my nerd path

I first wanted to call this chapter "To be continued" and add new chapters to my nerd path. But I think it won't be continued. I don't want to make my living as a developer. I love technology! I'll develop in my free time, learn new technologies, automate stuff I could do way quicker manually, and be the number one contact for my family, friends, and colleagues when they have technical questions. For me, it's a kind of relaxation to develop after a stressful day.

However, I have taken a new path. I moved to Copenhagen to start my master's in Technology Entrepreneurship @DTU. I wanted to risk something, go out of my comfort zone, try something new, get to know inspiring people from all over the world, and just calm down a bit after my time on the Corona-Warn-App. Above all, I want to work on projects that are more meaningful to me. I want to find something with impact. It might sound cheesy, but I want to change something in the world, tackle the significant challenges, and leave behind a better world.

Right now, I'm already three and a half months into this program, and I love it. It's just an inspiring environment where everything seems to be possible. I already got my first megalomaniac idea. More on that later when I make that dream come true.

Wood picture: Andrey Haimin

Thanks to Niklas and Leo for proofreading.

PDF CV/Resume from Figma template with Auto Layout

A simple one-page CV/Resume template that heavily relies on Auto Layout, making it super easy to adjust.

Motivation

If you look at my projects, you can see parts of my history of simplifying CVs. To quote myself:

Why bother with Word or a graphics program when creating your CV? Hence, I designed and developed a tool with Vue 3 which takes a JSON and generates a CV from it. It is easy to maintain as new entries can be added to JSON and the CV will be updated automatically.

To be honest, I still like this programmatic approach, but I like pixel-perfect designs, which are hard to achieve with PDFs generated from HTML (at least to my knowledge). Then Figma introduced Auto Layout, which makes it sooooo easy to adjust structured designs. That's when I started playing around with CVs in Figma.

CV template in Figma

As my history of CVs is known to friends, and my current CV is public, I was asked multiple times if people could borrow my CV to adjust it to their needs. Of course, they can! But if it's of interest to them, some other people might also benefit from it.

» Here it is - just create a copy.

How to adjust it

I thought about writing a comprehensive tutorial. But I think it's straightforward, and I also added some notes to the template so everybody can benefit from it. Therefore, please let me know if there are some "bugs" or if you don't understand something (@luka_harambasic).

Sources

Quickly copying paths to the terminal on macOS

Intro

A friend once showed me this little tip. We were developing an Android app, and therefore we frequently tested our apps on different physical devices. The compiled app had very speakable and easy to type names... not! And then I was told that I can easily copy a complete file path. With this I don't need to figure out where I am right now in my terminal or where I need to navigate to. It might be obvious for those who know it, but sooo many people don't. And everybody I told this to was amazed.

How-To

You only have to select a file in Finder, press CMD + C, and then paste it via CMD + V into your favourite terminal.

Here you can see it with a little script I use for compressing PDFs.

Copy and paste a file path via keyboard commands on macOS

Mac utility must haves

Explore my essential Mac utilities, including clipboard enhancements, window management, and more.

Motivation

Over the last few years, I have repeatedly recommended these utilities and shortcuts to friends. Most of them are in developer roles. I used them a lot while coding, but I also used them for pure design and university-related work. The recommendations are stable and versatile. As you'll see below, I currently work as a Product Manager; therefore, most examples focus on that task field. But you can make it your own.

If you are interested in a video walking you through these tools, I can recommend Set up a Mac in 2024 for Power Users and Developers by Syntax, which covers almost everything I describe here.

The utilities

Every section starts with a bit of motivation for why I use the tool and then goes over how I have set it up and how I use it.

Screenshots to clipboard

▶︎ macOS Screenshots

I don't know how many screenshots I take per day. They have one thing in common: they are only temporarily relevant. If it needs to be persisted, I handle it differently (e.g., save a website as a PDF). And I don't need such temporary data on my desktop or saved somewhere else. I need it in my clipboard to paste it somewhere, e.g., E-Mail, Slack, or Figma. I can't remember in which macOS version Apple introduced the current screenshot solution, but that was the time I ditched the way more powerful Snagit (I even paid for it). The built-in solution just works. But you need to set it up to do so.


  1. Open the Screenshot app.
  2. Make sure one of the three buttons on the left is selected.
  3. Click Options and, under "Save to", select Clipboard.

From now on, you can take screenshots with COMMAND + SHIFT + 4, which will instantly be saved to your clipboard. You can then paste them as you normally would (COMMAND + V), making your workflow more efficient.
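If you ever want the same behaviour from the terminal (for example inside a script), the built-in screencapture command can send a capture straight to the clipboard as well:

# -i lets you select an area interactively, -c puts the result on the clipboard instead of a file
screencapture -i -c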

Also, an important remark: if you heavily rely on annotations, this might not be the best workflow for you. But if I need to annotate something, I just quickly paste it into Figma and add an arrow or box there.


Clipboard history

▶︎ Clipy

brew install --cask clipy

You might be asking yourself: what the hell is a clipboard manager? I was like you, and now I can't imagine my life without one. It gives you access to everything you have copied. This is helpful when you start doing something, need to jump on something else, copy stuff in between, and then want to continue what you didn't finish. With a clipboard manager, everything you copied previously is still accessible. Another use case is when I need to copy colors from one place to another, e.g., from a design file to my code editor; I copy all the codes once and then paste them in whichever order I need without switching back and forth between the applications. Windows has had this built in since Windows 10, but macOS still lacks the feature.


After you install Clipy, you can use it via SHIFT + COMMAND + C. It's just one additional keystroke compared to what you are used to. It has many customization options, but I only changed a little: I make sure that already copied items are visible without the need to navigate to a folder first (see screenshot). Therefore, you need to set the number of items placed inline to a decent number; I have it at 10.

A nice side effect is that it strips the styling from copied text. For example, copying something from VSCode to Outlook normally carries over all the styling, but I can't think of a use case where I want that. Maybe I want the hierarchy of the text, but not the styling. Everything should be Markdown.
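As a small aside (this is plain macOS, not a Clipy feature): if you only need to strip formatting once, you can round-trip the clipboard through the built-in command line tools:

# pbpaste prints the clipboard as plain text, pbcopy writes it back without any styling
pbpaste | pbcopy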


Switching between windows

▶︎ AltTab

brew install --cask alt-tab

AltTab solves one of my main problems with window switching on macOS: COMMAND + TAB can't handle multiple instances of the same program. I tried multiple virtual desktops and switching between them with gestures, but that approach only worked well for me with multiple monitors. Besides this benefit, AltTab adds more context to the open windows: a screenshot and the window's title.

AltTab cycling through multiple windows

I highly recommend replacing the default COMMAND + TAB with AltTab. After the installation, it will guide you through the process of replacing it. Below, you can see my Appearance settings; they are the outcome of trial and error. You can also turn off the screenshots in the preview.

AltTab appearance settings (around 20 options)


Window management

▶︎ Rectangle

brew install --cask rectangle

Once you can switch between all your windows, they need to be organized: one goes to the left half, one to the right half. Another one needs to be maximized without going fullscreen - I don't like the macOS fullscreen and split mode.

Rectangle resizing multiple windows by dragging them to the side of the screen

To do so, you either use your mouse to drag a window into one of the hot areas (you can define them in the settings) or double-click the title bar to maximize a window. But I would encourage you to use the keyboard shortcuts - I tend to use the mouse, but I would love to get used to them.

Overview of the Rectangle shortcut settings


Better Spotlight

▶︎ Raycast

brew install --cask raycast

Everything up to this point is something I would recommend to everyone who works on a Mac daily. Raycast is a little bit more nerdy, but it can also be useful for everyone else. But what is it even? It is an extensible version of the default macOS Spotlight, which is already awesome. But Raycast can do more.


I use it for the same things as Spotlight, e.g., to open my applications, do simple calculations, and make simple currency conversions. But Raycast also allows me to install extensions or to use my own scripts. One default extension is the emoji search; it gives me the same way to add emojis in every application. I don't need to consider whether I'm in Slack, Notion, Jira, or Gmail. Besides this, here is a short list of other extensions:

  • Color Picker - Pick a color everywhere and get the HEX code on your clipboard.
  • Todoist - Create a task or search through all of them.
  • Shortcut - Run your macOS/iOS shortcuts. I have some tiny automations I want to share between my Mac and my iPhone, so having a Shortcut instead of a bash script or something similar that only works on my Mac is easier.
  • Calendar - See your next events and join calls.

I also use some custom commands for simple automations, for example, to open a specific website with some parameters. Below is a simple example for a translation service. As you can see, it's just a bash script, so you can do whatever you want.

#!/bin/bash

# Required parameters:
# @raycast.schemaVersion 1
# @raycast.title DeepL
# @raycast.mode silent

# Optional parameters:
# @raycast.icon images/deepl.png
# @raycast.argument1 { "type": "text", "placeholder": "From", "percentEncoded": true }
# @raycast.argument2 { "type": "text", "placeholder": "To", "percentEncoded": true }
# @raycast.argument3 { "type": "text", "placeholder": "Text", "percentEncoded": true }
# @raycast.packageName Translations

# Documentation:
# @raycast.description Open DeepL
# @raycast.author Luka Harambasic
# @raycast.authorURL https://harambasic.de

# From & to are the typical language codes, like en, de, da etc.
open "https://www.deepl.com/translator#/$1/$2/$3"


Raycast also offers its own solutions for screenshots, clipboard management, and window management. I don't use them, I stick to the solutions described above, but hey, maybe that's something for you.

Useful shortcuts

Besides these utilities, there are two shortcuts (beyond copy and paste) that I use on a daily basis.

CTRL + L: In Notion and Figma (I don't know if there are others), it copies the URL of what you currently have open directly to your clipboard. Quickly share a Notion page via Slack or link a Figma screen in a Jira ticket.

CTRL + K: This opens a command palette similar to Spotlight/Raycast in multiple applications and websites (e.g., Jira, Figma, Notion, Slack) that lets you perform all kinds of tasks, depending on the application.

Closing thoughts

Most of these features have shipped with Windows for years, which is strange, because it means I need to go through all of this every time I set up a new Mac. But thanks to open-source solutions like Clipy, AltTab, and Rectangle, macOS feels way more usable. So think about donating to them, or to the other open-source solutions you use on a daily basis. And remember: besides monetary support, you can also contribute to the code base.

]]>