<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Jay Miracola - Clouds Are Metal]]></title><description><![CDATA[Jay Miracola - Clouds Are Metal]]></description><link>https://blog.miraco.la</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 21:33:43 GMT</lastBuildDate><atom:link href="https://blog.miraco.la/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Write a Kubernetes Controller With Zero Code]]></title><description><![CDATA[The Problem
Sometimes we want to have control loops that watch for state, and make changes based on that state. In Kubernetes that's a controller, but writing a Kubernetes controller in Go is a non-trivial task. It requires knowledge of Kubernetes, g...]]></description><link>https://blog.miraco.la/write-a-kubernetes-controller-with-zero-code</link><guid isPermaLink="true">https://blog.miraco.la/write-a-kubernetes-controller-with-zero-code</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[crossplane]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[AI]]></category><category><![CDATA[llm]]></category><category><![CDATA[ollama]]></category><dc:creator><![CDATA[Jay Miracola]]></dc:creator><pubDate>Wed, 14 Jan 2026 02:32:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/_JTF0Prc7jc/upload/f499cd94ec61faada08e45ba6ec7e063.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>The Problem</strong></p>
<p>Sometimes we want control loops that watch for state and make changes based on that state. In Kubernetes <a target="_blank" href="https://kubernetes.io/docs/concepts/architecture/controller/">that's a controller</a>, but writing a Kubernetes controller in Go is a non-trivial task. It requires knowledge of Kubernetes, Go, and sometimes lots of complex logic. What if there were a way to author these controllers without code? We often want to solve problems ranging from wildly complex to very trivial, but lack either the knowledge to author a controller or the time to complete the task. Imagine the example below as a thought experiment rather than a direct use case. Let's jump on the AI hype train together, if only for a moment, to explore how it might solve our daily challenges.</p>
<p><strong>New Crossplane Primitives</strong></p>
<p>Recently Crossplane released v2, which among other things like namespace-scoped resources <a target="_blank" href="https://docs.crossplane.io/latest/operations/">also included operations</a>. Operations were made to solve a lot of day 2 problems like backups or even configuration validation. They can be extended to an enormous number of use cases, but today we will use the <a target="_blank" href="https://docs.crossplane.io/latest/operations/watchoperation/">watch operation</a> to monitor and change a deployment based on my requirements. I will be using Ollama to run an LLM locally, namely <code>gpt-oss:20b</code>, in combination with the <a target="_blank" href="https://marketplace.upbound.io/functions/upbound/function-openai/v0.3.0">open-ai function</a>, which has been extended to allow calling any AI API that uses OpenAI’s API format. As grandpa used to say, “a token saved is a token earned,” or something like that.</p>
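<p>If you want to follow along with a local model, pulling the one used in this post with Ollama is a one-liner (any endpoint that speaks OpenAI's API format should work just as well):</p>
<pre><code class="lang-plaintext"># the Ollama server usually runs as a service; start it manually with "ollama serve" if not
ollama pull gpt-oss:20b
</code></pre>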
<p><strong>The Controller</strong></p>
<p>My example, published at <a target="_blank" href="https://github.com/jaymiracola/configuration-english-controller">https://github.com/jaymiracola/configuration-english-controller</a>, will allow you to run an operation that watches deployments in the default namespace and, regardless of the deployment applied, ensures they are all scaled to 3 replicas. As promised, it's all plain English.</p>
<pre><code class="lang-plaintext">          systemPrompt: |-
            You are a Kubernetes controller implemented as an LLM.

            Goal:
            - Ensure any watched Deployment always runs at least 3 replicas.

            Rules:
            - If spec.replicas is missing or less than 3, set it to 3.
            - Otherwise, make no changes.
            - Output one or more fully-specified Kubernetes manifests as YAML.
            - Include metadata.name and metadata.namespace from the original resource.
            - Only output YAML manifests, separated by "---" if there are multiple.
          userPrompt: "Inspect the watched nginx Deployment resource and adjust it to satisfy the rules above."
</code></pre>
<p>In order to run it, most of the steps are taken care of by the repository provided, other than needing an LLM (Ollama, OpenAI, etc.) to connect to. My example is currently set up for a local Ollama instance with no auth. Past that, all you need to do is the following:</p>
<p><code>git clone https://github.com/jaymiracola/configuration-english-controller.git &amp;&amp; cd configuration-english-controller</code></p>
<p>Edit the secret in the example folder with your credentials for your LLM.</p>
<p><code>up project run --local</code></p>
<p>The configuration will be packaged and applied to a locally created kind cluster.</p>
<p><code>kubectl apply -f examples/secret.yaml</code></p>
<p>Now everything should be ready to go! Apply the example deployment with a single replica and watch the magic happen. The operation sees the deployment with a single replica, the LLM picks that up, changes it to 3 replicas, and the process is complete. Change it manually again if you’d like to see it in action once more.</p>
<p><code>kubectl apply -f examples/deployment.yaml</code>  </p>
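<p>A quick way to watch the reconciliation happen, assuming the example deployment is the nginx one referenced in the prompt above:</p>
<pre><code class="lang-plaintext"># watch the replica count get bumped from 1 to 3 by the operation
kubectl get deployment nginx -n default -w
</code></pre>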
<p><strong>Some Caveats</strong></p>
<p>As I stated before, this is simply a thought experiment and I have certainly taken some creative liberties in calling it a controller. It is, in its simplest form, something for you to look at and think about what else could be possible. Maybe an operation that denies changes from being applied on Fridays? A step in your infrastructure that <a target="_blank" href="https://github.com/ytsarev/configuration-aws-database-ai/">watches for database resource utilization</a> and scales as needed? Hallucinations and non-deterministic behaviours become less problematic as LLMs and prompts mature. A world where we solve real problems while applying our institutional knowledge in organizations <a target="_blank" href="https://www.upbound.io/manifesto">may be closer than once thought</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Old hardware, New (AI) problems]]></title><description><![CDATA[What do we say to buying bleeding edge hardware for running AI workloads?
Not today! I have an old HP Z600 (2009!) and GPU that I wanted to use to run #Kubernetes, #Ollama, Open WebUI, and utilize NVIDIA’s gpu-operator. It has been a solid machine th...]]></description><link>https://blog.miraco.la/old-hardware-new-ai-problems</link><guid isPermaLink="true">https://blog.miraco.la/old-hardware-new-ai-problems</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[ollama]]></category><category><![CDATA[NVIDIA]]></category><category><![CDATA[open-webui]]></category><dc:creator><![CDATA[Jay Miracola]]></dc:creator><pubDate>Mon, 03 Feb 2025 02:38:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/hxwvWHmCdBM/upload/21fd6428f7fe08103f5320338a783796.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What do we say to buying bleeding edge hardware for running AI workloads?</p>
<p>Not today! I have an old HP Z600 (2009!) and a GPU that I wanted to use to run #Kubernetes, #Ollama, Open WebUI, and NVIDIA’s gpu-operator. It has been a solid machine through the years with dual-socket Xeons and loads of ECC RAM, and it simply won't quit. It has run several hypervisors, OpenStack, OpenShift, and more! When I decided to plug in a GPU and load up my AI stack, I had no idea the rabbit hole I would go down. Here is the short story: Ollama’s GPU runner is by default built with the AVX instruction set, which is not available on old CPUs. I briefly thought it was time to retire my old machine and buy something a little newer, but no! The kind Ollama devs added a build argument to their Dockerfile, <code>--build-arg CUSTOM_CPU_FLAGS=</code>. Leaving the flag’s value empty builds <a target="_blank" href="https://hub.docker.com/r/jaymiracola/ollama-noavx">the GPU runner without AVX</a>, allowing my beloved Z600 to live on, continuing to serve modern workloads.</p>
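<p>For reference, building an AVX-free image yourself is roughly a two-liner; this is a sketch assuming a checkout of the Ollama source tree containing that Dockerfile, and the image tag is arbitrary:</p>
<pre><code class="lang-plaintext">git clone https://github.com/ollama/ollama.git &amp;&amp; cd ollama
# leaving CUSTOM_CPU_FLAGS empty builds the GPU runner without AVX
docker build --build-arg CUSTOM_CPU_FLAGS= -t ollama-noavx .
</code></pre>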
<p>Moral of the story? With a little ingenuity (and a helpful open-source community), old hardware can still punch above its weight in the AI era!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738548966190/278d4785-4ee5-4a4b-9945-fefa425344dd.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[BGP, Cilium, and FRR: Top of Rack For All!]]></title><description><![CDATA[I recently came across a LinkedIn post talking about the above concepts and how trivial they are to set up. The goal: use Cilium's BGP capabilities to either expose a service or export the pod CIDR and advertise its range to a peer. We are all on different...]]></description><link>https://blog.miraco.la/bgp-cilium-and-frr-top-of-rack-for-all</link><guid isPermaLink="true">https://blog.miraco.la/bgp-cilium-and-frr-top-of-rack-for-all</guid><category><![CDATA[frr]]></category><category><![CDATA[cilium]]></category><category><![CDATA[bgp]]></category><category><![CDATA[k3s]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Ubiquiti]]></category><dc:creator><![CDATA[Jay Miracola]]></dc:creator><pubDate>Fri, 15 Mar 2024 15:17:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/BiWM-utpVVc/upload/7f144c7b7ee2b4633d2c94b03069d2bf.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I recently came across a LinkedIn post talking about the above concepts and how trivial they are to set up. The goal: use Cilium's BGP capabilities to either expose a service or export the pod CIDR and advertise its range to a peer. We are all on different chapters of our life's book, so I wanted to explain the setup a little more in order to possibly help someone out there add a feather to their hat!</p>
<p>Why would you want to expose a pod network directly using BGP? The concepts are relatively simple. ToR, or top of rack, is the idea that in the data center a rack has multiple servers in it, and at the top is a switch that they all connect to. Then off to an aggregate it goes. In this scenario, we have no load balancers in between as Kubernetes is keen to do, nor do we need to expose node ports. Just straight connections via advertised routes directly to the applications. Why set this up at home? It's likely that any services you may be running are one-offs serving things like Plex, Pihole, etc. This makes it incredibly easy to connect to the applications directly.</p>
<h3 id="heading-the-setup">The Setup</h3>
<p>In my setup I will be using FRR on a UDM-SE and a Raspberry Pi running K3s and Cilium. Feel free to use a standalone *nix box to set up FRR, but know that you will also need to add some static routes to it. For my UDM, the FRR package was already installed! Under the hood, Ubiquiti uses it for its Magic VPN feature. I don't use it, so it was straightforward to enable the systemd service with my own custom configuration, shown below. For more details, <a target="_blank" href="https://chrisdooks.com/2023/06/26/configure-bgp-on-a-unifi-dream-machine-udm-v3-1-x-or-later/">Chris's blog here</a> can show you everything you need to do.</p>
<pre><code class="lang-plaintext">hostname UDM-SE
frr defaults datacenter
log file stdout
service integrated-vtysh-config
!
!
router bgp 65001
 bgp router-id 192.168.120.254
 neighbor 192.168.120.11 remote-as 65000 #raspberry pi
 neighbor 192.168.120.11 default-originate #raspberry pi
 !
 address-family ipv4 unicast
  redistribute connected
  redistribute kernel
  neighbor 192.168.120.11 soft-reconfiguration inbound
  neighbor 192.168.120.11 route-map ALLOW-ALL in
  neighbor 192.168.120.11 route-map ALLOW-ALL out
 exit-address-family
 !
route-map ALLOW-ALL permit 10
!
line vty
!
</code></pre>
<p>The above is my configuration for FRR. Straightforward, aside from my comments noting where the Raspberry Pi single-node K3s host lives.</p>
<p>Now it's on to Cilium and K3s. Note that you will need to disable flannel, servicelb, and network-policy. You can do this with a fresh install, by setting environment variables, or by editing the systemd service. If you are running this on an existing installation, you will likely also need to remove the flannel vxlan interface (see the removal sketch after the unit file below). Run <code>ip link show</code> to verify its presence if you encounter a crash loop with Cilium.</p>
<pre><code class="lang-plaintext">ExecStart=/usr/local/bin/k3s \
    server \
        '--flannel-backend=none' \
        '--disable-network-policy' \
        '--disable=servicelb' \
</code></pre>
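<p>If Cilium crash loops on an existing installation, removing the leftover flannel interface might look like this (the interface is typically named <code>flannel.1</code> on K3s, but confirm with <code>ip link show</code> first):</p>
<pre><code class="lang-plaintext"># confirm the leftover vxlan interface exists, then remove it
ip link show flannel.1
sudo ip link delete flannel.1
</code></pre>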
<p>Installing Cilium with the CLI and the single flag needed for this use case is as follows: <code>cilium install --set bgpControlPlane.enabled=true</code>. After a successful installation, it's time to create the <code>CiliumBGPPeeringPolicy</code>. Below is my example with notes.</p>
<pre><code class="lang-plaintext">apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: custom-policy
spec:
  virtualRouters:
  - exportPodCIDR: true # allows the pod CIDR to be advertised
    localASN: 65000
    neighbors:
    - connectRetryTimeSeconds: 120
      eBGPMultihopTTL: 1
      holdTimeSeconds: 90
      keepAliveTimeSeconds: 30
      peerASN: 65001 #FRR ASN
      peerAddress: 192.168.120.1/32 # FRR address
      peerPort: 179
</code></pre>
<h3 id="heading-validation">Validation</h3>
<p>On to validation. From the FRR side I run <code>vtysh -c 'show ip bgp'</code> and receive</p>
<pre><code class="lang-plaintext">   Network          Next Hop            Metric LocPrf Weight Path
*&gt; 10.0.0.0/24      192.168.120.11
</code></pre>
<p>From where my Cilium binary is installed with access to my K3s cluster I run <code>cilium bgp peers</code> and receive</p>
<pre><code class="lang-plaintext">Node     Local AS   Peer AS   Peer Address    Session State   Uptime     Family         Received   Advertised
pi       65000      65001     192.168.120.1   established     12h49m3s   ipv4/unicast   7          1
                                                                         ipv6/unicast   0          0
</code></pre>
<p>From here, if you had flannel installed previously, you will likely need to restart the pods so they pick up addresses from the new Cilium CNI range. Run a curl or whatever you see fit and verify it's working!</p>
<pre><code class="lang-plaintext"> $curl http://10.0.0.83:8080
  Hello World!
</code></pre>
]]></content:encoded></item><item><title><![CDATA[How conntrack Could Be Limiting Your k8s Gateway]]></title><description><![CDATA[Under high load in specific scenarios, a Kubernetes gateway may be limited by more than just its obvious CPU and Memory limits or requests if Karpenter is aggressively sizing the node (a different topic!). You may be hitting a wall in conntrack exhau...]]></description><link>https://blog.miraco.la/how-conntrack-could-be-limiting-your-k8s-gateway</link><guid isPermaLink="true">https://blog.miraco.la/how-conntrack-could-be-limiting-your-k8s-gateway</guid><category><![CDATA[conntrack]]></category><category><![CDATA[ipvs]]></category><category><![CDATA[netfilter]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[gateway]]></category><category><![CDATA[AWS]]></category><category><![CDATA[iptables]]></category><dc:creator><![CDATA[Jay Miracola]]></dc:creator><pubDate>Fri, 09 Feb 2024 17:54:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/FTKfX3xZIcc/upload/e711e9c5179b79fa76898afbc1cddf4d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Under high load in specific scenarios, a Kubernetes gateway may be limited by more than just its obvious CPU and Memory limits or requests if Karpenter is aggressively sizing the node (a different topic!). You may be hitting a wall in conntrack exhaustion.</p>
<p>For those uninitiated: conntrack, put simply, is a subsystem of the Linux kernel that tracks all network connections entering, exiting, or passing through the system, allowing it to monitor and manage the state of each connection. That state is crucial for tasks like NAT (Network Address Translation), firewalling, and maintaining session continuity. It operates as part of Netfilter, the Linux kernel's framework for network packet filtering, which provides the underlying infrastructure for connection tracking, packet filtering, and network address translation. The problem in a nutshell: if connection tracking exceeds the <code>conntrack_max</code> value (found with <code>sysctl net.netfilter.nf_conntrack_max</code>) due to long-lived connections, stale entries, or an inundation of requests, your CPU and memory headroom will look fine but requests will be dropped.</p>
<pre><code class="lang-plaintext">$ sysctl net.netfilter.nf_conntrack_max
net.netfilter.nf_conntrack_max = 131072
</code></pre>
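<p>You can compare the current number of tracked connections against that limit directly on a node; either of these works on most distributions:</p>
<pre><code class="lang-plaintext"># current number of tracked connections
sysctl net.netfilter.nf_conntrack_count
# or, with the conntrack-tools package installed
conntrack -C
</code></pre>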
<h3 id="heading-how-to-monitor">How to monitor</h3>
<p>How can we monitor for this type of event, or really any of the hardware and OS level metrics that are important to collect? Prometheus ships a collector called <a target="_blank" href="https://github.com/prometheus/node_exporter"><code>node_exporter</code></a>. By utilizing it, you will be able to track and monitor conntrack-related metrics, described by the project as <code>Shows conntrack statistics (does nothing if no /proc/sys/net/netfilter/ present).</code> If you are running on AWS, on a Nitro-based instance type, with ENA driver version 2.8.1 or newer, AWS also has the capability of gathering these metrics into CloudWatch should you prefer.</p>
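<p>With node_exporter scraped, a simple saturation expression makes a reasonable alert; the metric names come from node_exporter's conntrack collector, and the 80% threshold here is just an arbitrary starting point:</p>
<pre><code class="lang-plaintext"># fires when a node's conntrack table is over 80% full
node_nf_conntrack_entries / node_nf_conntrack_entries_limit &gt; 0.8
</code></pre>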
<h3 id="heading-ways-around-the-problem">Ways around the problem</h3>
<p>So how can we get around the issue? The most straightforward answer, in the context of AWS, would be to upgrade your EC2 instance size. AWS calculates and sets the conntrack table size based on CPU, memory, and OS (32/64-bit). Throw more power at it! But wait, isn't that a monolith mentality?</p>
<pre><code class="lang-plaintext">root@ip-192-168-30-28:/# sysctl net.netfilter.nf_conntrack_max
net.netfilter.nf_conntrack_max = 262144 # m5.large

root@ip-192-168-98-251:/# sysctl net.netfilter.nf_conntrack_max
net.netfilter.nf_conntrack_max = 131072 # t2.micro
</code></pre>
<p>Another way is simply knowing your machine, traffic type, what it can handle through performance tests, and setting your <code>conntrack_max</code> accordingly. Using the kube-proxy ConfigMap, we can declaratively set the max as seen in <a target="_blank" href="https://kubernetes.io/docs/reference/config-api/kube-proxy-config.v1alpha1/#kubeproxy-config-k8s-io-v1alpha1-KubeProxyConntrackConfiguration">the Kubernetes docs</a>.</p>
<pre><code class="lang-plaintext">    conntrack:
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
</code></pre>
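<p>kube-proxy applies these settings as the larger of <code>maxPerCore</code> times the node's core count and <code>min</code>, so the minimum acts as a floor on small nodes. A quick worked example with the values above:</p>
<pre><code class="lang-plaintext"># effective nf_conntrack_max = max(maxPerCore * cores, min)
# 2-core node: max(32768 * 2, 131072) = 131072  (the floor wins)
# 8-core node: max(32768 * 8, 131072) = 262144
</code></pre>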
<p>A less likely, possibly excessive for this issue, design change that would need consideration in various elements of your architecture is the switch from iptables to IPVS. <a target="_blank" href="https://kubernetes.io/docs/reference/networking/virtual-ips/#proxy-modes">Shifting from iptables to IPVS</a> for load balancing addresses the bottleneck of hitting the maximum connection tracking capacity. Unlike iptables, which filters and inspects packets and relies on connection tracking, IPVS efficiently routes traffic to backends with load balancing algorithms, <a target="_blank" href="https://www.tigera.io/blog/comparing-kube-proxy-modes-iptables-or-ipvs/">bypassing the exhaustive state tracking.</a></p>
<p>Least likely and last, the use of tunnels. By using an IP tunnel and encapsulating the traffic, the far end only sees the tunnel as the tracked connection. This is a more feasible option for public-facing proxies hosted <em>externally</em> to the Kubernetes cluster (with the same conntrack adjustments), and/or for clusters not serving a public gateway.</p>
<h3 id="heading-wrapping-up">Wrapping up</h3>
<p>Like anything in IT, it's important to monitor everything you can while being mindful of the pitfalls of cardinality and the inundation of metrics that brings. Conntrack is only a small piece of the larger puzzle of issues we solve every day!</p>
]]></content:encoded></item><item><title><![CDATA[A Tale of Two VLANS]]></title><description><![CDATA[When handling sensitive traffic, in my scenario DNS, it's sometimes necessary to isolate the traffic from one another. In this example, I wanted one DNS server on my Kubernetes cluster to serve two VLANs but I didn't want those VLANs to have any acces...]]></description><link>https://blog.miraco.la/a-tale-of-two-vlans</link><guid isPermaLink="true">https://blog.miraco.la/a-tale-of-two-vlans</guid><category><![CDATA[vlans]]></category><category><![CDATA[ksniff]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[metallb]]></category><category><![CDATA[Wireshark]]></category><dc:creator><![CDATA[Jay Miracola]]></dc:creator><pubDate>Mon, 05 Feb 2024 16:04:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/6vdNPL3a5SE/upload/c799b2001e421ee19a1dd6247173b9c3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When handling sensitive traffic, in my scenario DNS, it's sometimes necessary to isolate traffic streams from one another. In this example, I wanted one DNS server on my Kubernetes cluster to serve two VLANs, but I didn't want those VLANs to have any access to one another. I also wanted a single pane of glass to observe the requests on both VLANs, and both VLANs arrive on a single trunk to the server in question. Instead of splitting the service, I split the traffic using MetalLB.</p>
<p>Getting started, I needed to choose BGP or ARP. I chose Layer 2, as the router in question isn't capable of BGP out of the box. Next I needed to configure the server interface with the second, non-default VLAN tag 12 and map it to a port.</p>
<pre><code class="lang-plaintext">auto eth0.12
iface eth0.12 inet static
    address 172.16.12.10
    netmask 255.255.255.0
    network 172.16.12.0
    broadcast 172.16.12.255
    gateway 172.16.12.1
    dns-nameservers 1.1.1.1 8.8.4.4
    vlan_raw_device eth0
</code></pre>
<p>Now the server should be able to reach both the default VLAN and the newly configured VLAN 12 out of the same physical interface, with VLAN 12 reachable via interface <code>eth0.12</code>.</p>
<pre><code class="lang-plaintext">$ arping 172.16.12.5 -I eth0.12
ARPING 172.16.12.5
60 bytes from 9c:30:5b:06:6d:f5 (172.16.12.5): index=0 time=117.228 msec
56 bytes from 9c:30:5b:06:6d:f5 (172.16.12.5): index=1 time=39.202 msec
56 bytes from 9c:30:5b:06:6d:f5 (172.16.12.5): index=2 time=132.372 msec
56 bytes from 9c:30:5b:06:6d:f5 (172.16.12.5): index=3 time=72.073 msec

$ ping 172.16.12.5 -I eth0.12
PING 172.16.12.5 (172.16.12.5) from 172.16.12.10 eth0.12: 56(84) bytes of data.
64 bytes from 172.16.12.5: icmp_seq=1 ttl=255 time=56.1 ms
64 bytes from 172.16.12.5: icmp_seq=2 ttl=255 time=8.98 ms
64 bytes from 172.16.12.5: icmp_seq=3 ttl=255 time=13.5 ms
</code></pre>
<p>First I need to tell MetalLB about the IP pool reservations I've carved out for it on my network so it only pulls from the selected ranges. I'll set up two pools instead of adding a second range to a single pool, so I can appropriately tie each one to its L2 advertisement in the next step.</p>
<pre><code class="lang-plaintext">apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool
  namespace: metallb
spec:
  addresses:
  - 172.16.12.10-172.16.12.15
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: private-pool
  namespace: metallb
spec:
  addresses:
  - 192.168.120.20-192.168.120.30
</code></pre>
<p>Now I need to set up the L2 advertisements and tie them to the correct interfaces, referencing the IP pools above so each pool is advertised on the right interface.</p>
<pre><code class="lang-plaintext">apiVersion: v1
items:
- apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: public-l2
    namespace: metallb
  spec:
    interfaces:
    - eth0.12
    ipAddressPools:
    - public-pool
kind: List
---
apiVersion: v1
items:
- apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: private-l2
    namespace: metallb
  spec:
    interfaces:
    - eth0
    ipAddressPools:
    - private-pool
kind: List
</code></pre>
<p>Now MetalLB's speaker logs should show the appropriate pools and advertisements tied to the correct interface. Without this, the speaker logs indicate fuzzy logic is used to pick an interface, and that doesn't work out well in this configuration. Moving on, I now need to configure my services so that they use the correct IP range when calling <code>type: LoadBalancer</code> on the separate interfaces. For brevity, I'll only show an example for the VLAN-tagged interface; the other service matches the same application, ports, etc. but serves the default VLAN (a sketch of it follows the manifest below).</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Service
metadata:
  labels:
    app: dns-server
  name: dns-public
  namespace: dns-server
spec:
  allocateLoadBalancerNodePorts: true
  externalTrafficPolicy: Local
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerIP: 172.16.12.10
  ports:
  - name: dns
    port: 53
    targetPort: dns
  - name: dns-udp
    port: 53
    protocol: UDP
    targetPort: dns-udp
  selector:
    app: dns-server
    release: dns-server
  type: LoadBalancer
</code></pre>
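<p>For completeness, a minimal sketch of that counterpart service on the default VLAN might look like the following; the name <code>dns-private</code> is illustrative, and the IP is assumed to come from the <code>private-pool</code> range defined earlier:</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Service
metadata:
  labels:
    app: dns-server
  name: dns-private
  namespace: dns-server
spec:
  externalTrafficPolicy: Local
  loadBalancerIP: 192.168.120.20 # assumed address from private-pool
  ports:
  - name: dns
    port: 53
    targetPort: dns
  - name: dns-udp
    port: 53
    protocol: UDP
    targetPort: dns-udp
  selector:
    app: dns-server
    release: dns-server
  type: LoadBalancer
</code></pre>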
<p>From here, everything is up and working. Verifying would be as simple as using <code>nslookup</code> against the appropriate IPs, or <code>arping</code> again to be certain our L2 advertisements are working. Instead, let's use a Kubernetes tool called <a target="_blank" href="https://github.com/eldadru/ksniff"><code>ksniff</code></a> to observe the traffic on the speaker in-cluster. Here is an example of what that command might look like.</p>
<pre><code class="lang-plaintext">kubectl sniff -n metallb metallb-speaker -p
</code></pre>
<p>In my scenario, it was a bit more involved. Because I am running k3s on ARM, I needed to specify the containerd socket <em>and</em> use ARM-compatible images. I'm also going to filter for the traffic I'm interested in. Just in case you're curious, here's what that looks like!</p>
<pre><code class="lang-plaintext">kubectl sniff -n metallb argo-metallb-speaker-nprwp --socket /run/k3s/containerd/containerd.sock -p --image ghcr.io/nefelim4ag/ksniff-helper:v4 --tcpdump-image ghcr.io/nefelim4ag/tcpdump:latest -f "arp"
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706897555928/1a5829b0-2a98-4e9c-9cc4-2bb8e40a8a20.png" alt class="image--center mx-auto" /></p>
<p>Above we can see ARP being successfully requested and replied to for the appropriate IP and MAC. Now the speaker is successfully routing traffic via the appropriate services to the same DNS server in Kubernetes so all external requests can be observed in a single place.</p>
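<p>And, as mentioned above, a plain lookup against each load balancer IP confirms both VLANs are being served; the private-side IP shown here is assumed from the <code>private-pool</code> range:</p>
<pre><code class="lang-plaintext"># query the DNS service on the VLAN 12 address and on the default VLAN address
nslookup example.com 172.16.12.10
nslookup example.com 192.168.120.20
</code></pre>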
]]></content:encoded></item><item><title><![CDATA[App of Apps of Infra]]></title><description><![CDATA[Declarative infrastructure (IaC) by any means is necessary for the modern enterprise. A single source of truth in regards to not only applications but infrastructure and configuration is a must. Housing it all in git and adding necessary barriers fur...]]></description><link>https://blog.miraco.la/app-of-apps-of-infra</link><guid isPermaLink="true">https://blog.miraco.la/app-of-apps-of-infra</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[crossplane]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[DigitalOcean]]></category><category><![CDATA[kind]]></category><category><![CDATA[k8s]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[infrastructure]]></category><dc:creator><![CDATA[Jay Miracola]]></dc:creator><pubDate>Wed, 31 Jan 2024 22:45:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/uAWRPtZ6n0s/upload/826114d45c6b503a5c0e7105c044c8e8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Declarative infrastructure (IaC) by any means is necessary for the modern enterprise. A single source of truth in regards to not only applications but infrastructure and configuration is a must. Housing it all in git and adding necessary barriers further allows access control, multi-tenancy, and configuration boundaries via CI/CD and Gitops. Below, we will use Argo's App of Apps pattern to further describe the state of our Digital Ocean cluster using Crossplane. For the sake of brevity, I'll get straight to the needed components.</p>
<p>First lets stand up our bootstrap cluster using Kind</p>
<pre><code class="lang-plaintext">kind create cluster --name bootstrap
</code></pre>
<p>Next, go ahead and fork, copy, or do whatever you wish in order to get the necessary files I've shared in this repository: <a target="_blank" href="https://gitlab.com/jaymiracola/app-of-infra">https://gitlab.com/jaymiracola/app-of-infra</a>. After you've done so, we will install ArgoCD and then kick off our app-of-apps pattern by applying the first application, which is Argo itself. From there forward it will be managed via git.</p>
<pre><code class="lang-plaintext">helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argo argo/argo-cd -n argocd --create-namespace --version 5.51.4

kubectl apply -f applications.yaml
</code></pre>
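<p>For context, the <code>applications.yaml</code> being applied is the root of the app-of-apps pattern. A minimal sketch of such a root Application follows; the path and sync policy here are assumptions for illustration, the real manifest lives in the repository above:</p>
<pre><code class="lang-plaintext">apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: applications
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.com/jaymiracola/app-of-infra.git
    targetRevision: HEAD
    path: applications # assumed folder holding the child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
</code></pre>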
<p>Now ArgoCD and Crossplane have been installed and are being defined entirely from our git repository. It's time to define the infrastructure.</p>
<pre><code class="lang-plaintext">kubectl get applications -A
</code></pre>
<pre><code class="lang-plaintext">NAMESPACE   NAME           SYNC STATUS   HEALTH STATUS
argocd      applications   Synced        Healthy
argocd      argocd         Synced        Healthy
argocd      crossplane     Synced        Healthy
argocd      infra          Synced        Healthy
</code></pre>
<p>Next, we will define some infrastructure in Digital Ocean. Why Digital Ocean vs AWS, GCP, or Azure? Simple: I use them all the time for the low-cost services I run personally! Of course, from here you could define whatever you'd like with Crossplane, so the cloud and infrastructure are up to you.</p>
<p>Now we need to start defining Providers and ProviderConfigs for Crossplane: which cloud providers we want, and the keys that allow Crossplane to configure them on our behalf. I've already added the Digital Ocean Provider, so now we need to configure access.</p>
<p>I'll add the following manifest to my <code>/app-of-infra/applications/infra/manifests/</code> folder as I've already defined and declared it as an application that ArgoCD is tracking. You can get a Digital Ocean token <a target="_blank" href="https://docs.digitalocean.com/reference/api/create-personal-access-token/">here</a>.</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Secret
metadata:
  namespace: crossplane
  name: provider-do-secret
type: Opaque
data:
  token: BASE64ENCODED_PROVIDER_CREDS
---
apiVersion: do.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: do-config
  namespace: crossplane
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane
      name: provider-do-secret
      key: token
</code></pre>
<p><strong>Important!</strong> Do NOT have this in a public repo. There is a reason this part of the instruction is omitted from my demonstration repository.</p>
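<p>To fill in the <code>token</code> field, base64-encode your personal access token; remember that base64 is encoding, not encryption, which is exactly why the warning above applies:</p>
<pre><code class="lang-plaintext"># -n avoids encoding a trailing newline into the secret value
echo -n "YOUR_DIGITALOCEAN_TOKEN" | base64
</code></pre>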
<p>Last, it's time to declare infrastructure! As an example I will add the following configuration in <code>/app-of-infra/applications/infra/manifests/</code> to create a Kubernetes cluster in my Digital Ocean account. More examples of different Digital Ocean infrastructure <a target="_blank" href="https://github.com/crossplane-contrib/provider-digitalocean/tree/main/examples">can be found here</a>.</p>
<pre><code class="lang-plaintext">apiVersion: kubernetes.do.crossplane.io/v1alpha1
kind: DOKubernetesCluster
metadata:
  name: do-cluster
  namespace: infra
spec:
  providerConfigRef:
    name: do-config
  forProvider:
    region: nyc3
    version: 1.29.0-do.0
    nodePools:
      - size: s-1vcpu-2gb #lowest tier
        count: 1 #cost cutting for demo
        name: worker-pool
    maintenancePolicy:
      startTime: "03:00"
      day: sunday
    autoUpgrade: true
    surgeUpgrade: false
    highlyAvailable: false
</code></pre>
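<p>Once ArgoCD syncs that manifest, you can follow the cluster being provisioned from the bootstrap cluster. Assuming the provider registers the usual Crossplane managed-resource categories, either of these should show the resource and its readiness:</p>
<pre><code class="lang-plaintext"># list every Crossplane managed resource, or query the kind from the manifest above directly
kubectl get managed
kubectl get dokubernetescluster do-cluster
</code></pre>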
<p>Now you have successfully declared the entirety of your bootstrap cluster, all the way into platform infrastructure, in a pattern that is not only declarative but also idempotent! From here you can create more infrastructure as needed. You may also be interested in extending the control plane using Crossplane's Composite Resource Definitions, allowing SRE teams to abstract infra <em>with</em> apps (and more) into easily consumable APIs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1706740439553/99fdbd82-370b-41b9-a8aa-c9728edb645b.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item></channel></rss>