🔐 WireGuard Split-Tunnel VPN System

Conceptual & Architectural Guide - A Comprehensive Overview of Design, Architecture, and Data Flow

🎯 System Overview

This WireGuard VPN system implements a sophisticated split-tunnel architecture that intelligently routes different clients through different network paths based on their IP addresses. The system provides granular control over which traffic flows through a commercial VPN (OpenVPN provider) and which traffic goes directly to the internet, all while maintaining a unified WireGuard VPN interface for clients.

🎓 Key Concept: Split-Tunnel VPN

Unlike traditional "all-or-nothing" VPN setups where all traffic either goes through the VPN or doesn't, this system creates selective routing where different clients connected to the same WireGuard server can have their traffic routed through completely different paths based solely on their IP address assignment. This allows for optimized performance, cost control, and flexible privacy policies within a single infrastructure.

Core Design Principles

🏗️ High-Level Architecture

📊 System Architecture Diagram

[Architecture diagram. VPN clients (10.4.0.0/24) split into direct-routed IPs (10.4.0.0-127, skip VPN) and VPN-routed IPs (10.4.0.128-255, via the OpenVPN provider); all client DNS queries on port 53 go via dnsmasq through the VPN. The host namespace holds four components: WireGuard wg0 (10.4.0.1/24, UDP 45822), dnsmasq (10.4.0.1:53, upstream Cloudflare 1.1.1.2), the public NIC ens5 (10.0.1.191, AWS IP, direct internet exit), and the veth bridge veth-wg (10.200.1.1/24). A central routing decision engine (iptables + policy-based routing) classifies packets by source IP using fwmark 0x200 (VPN) and 0x100 (DNS): IPs 0-127 exit directly via ens5, while IPs 128-255 and DNS cross veth-wg into the VPN namespace (vpn_ns), which holds veth-vpn (10.200.1.2/24), the OpenVPN tun0 tunnel, a routing table with default via tun0, and namespace NAT (MASQUERADE to the provider's IP). Direct exits show the AWS IP with low latency and no extra encryption (e.g. logs.internal.company.com); VPN exits hide the AWS IP behind the provider's endpoint with full encryption (e.g. api.public-website.com).]

🧩 Core Components

WireGuard Interface (wg0)

Purpose: Provides encrypted VPN tunnel endpoint for client connections

IP: 10.4.0.1/24

Port: UDP 45822

Key Features:

  • Modern cryptography (Noise protocol)
  • Low latency overhead
  • No routing table (Table = off)
  • MTU: 1420 bytes
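
A minimal wg0 server configuration consistent with these parameters might look like the sketch below; the private key and peer entries are placeholders, and the PostUp/PostDown hooks point at the lifecycle scripts described later in this document.

# /etc/wireguard/wg0.conf - illustrative sketch; keys and peers are placeholders
[Interface]
Address = 10.4.0.1/24
ListenPort = 45822
MTU = 1420
Table = off                              # wg-quick installs no routes; policy routing decides paths
PrivateKey = <server-private-key>
PostUp = /etc/wireguard/wg0-up.sh
PostDown = /etc/wireguard/wg0-down.sh

[Peer]
# Direct-internet client (falls in the 10.4.0.0-127 range)
PublicKey = <client-public-key>
AllowedIPs = 10.4.0.50/32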

dnsmasq DNS Server

Purpose: DNS resolution with custom local domains and upstream forwarding

Listens On:

  • 127.0.0.1 (localhost)
  • 10.4.0.1 (WireGuard)
  • 10.200.1.1 (veth bridge)

Upstream DNS: Cloudflare (1.1.1.2, 1.0.0.2) and Google (8.8.8.8, 8.8.4.4)

Custom Records:

  • vault.raff.local → 10.4.0.7
  • rafflab.internal → 10.4.0.3
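
Based on the settings above, the relevant dnsmasq configuration would plausibly look like this sketch (the no-resolv line is an assumption, added so only the explicit upstreams are used):

# /etc/dnsmasq.conf - illustrative sketch of the documented settings
listen-address=127.0.0.1,10.4.0.1,10.200.1.1
no-resolv                                # assumption: ignore /etc/resolv.conf, use only the servers below
server=1.1.1.2
server=1.0.0.2
server=8.8.8.8
server=8.8.4.4
address=/vault.raff.local/10.4.0.7
address=/rafflab.internal/10.4.0.3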

Network Namespace (vpn_ns)

Purpose: Isolated network environment for VPN-routed traffic

Why Needed: Prevents routing conflicts between direct and VPN paths

Contains:

  • OpenVPN client process
  • tun0 interface (VPN tunnel)
  • Separate routing table
  • Independent iptables rules

Virtual Ethernet Pair (veth)

Purpose: Connects host namespace to VPN namespace

Host End: veth-wg (10.200.1.1/24)

Namespace End: veth-vpn (10.200.1.2/24)

Function: Acts as a virtual network cable between namespaces

OpenVPN Client

Purpose: Connects to commercial VPN provider (OpenVPN provider)

Runs Inside: vpn_ns namespace

Server: il66.OpenVPN provider.com

Protocol: UDP

Interface: tun0 (dynamically assigned IP)

iptables / netfilter

Purpose: Packet filtering, NAT, and marking

Tables Used:

  • nat: Address translation
  • mangle: Packet marking for routing
  • filter: Forwarding rules

Policy Routing Engine

Purpose: Routes packets to different routing tables based on marks and source

Custom Table: Table 200

Uses: fwmark-based routing decisions

Physical Network Interface (ens5)

Purpose: AWS EC2 instance's primary network interface

IP: 10.0.1.191 (private AWS VPC IP)

Function: Gateway to internet for direct-routed traffic

🌐 IP Address Allocation Strategy

💡 The Split Allocation Model

The system divides the 10.4.0.0/24 subnet into two equal halves. This binary split is the fundamental decision point for all routing logic in the system. Your IP address determines your entire network path.

IP Range                | Subnet        | Count   | Routing Path             | Use Case
10.4.0.0 - 10.4.0.127   | 10.4.0.0/25   | 128 IPs | Direct Internet          | Low-latency applications, streaming, gaming, trusted services
10.4.0.128 - 10.4.0.255 | 10.4.0.128/25 | 128 IPs | Through OpenVPN provider | Privacy-sensitive browsing, geo-restricted content, untrusted networks
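
On the client side, nothing but the assigned address selects the path. A sketch of a client configuration (keys and the server endpoint are placeholders, and routing all traffic via AllowedIPs = 0.0.0.0/0 is an assumption); changing Address to 10.4.0.200/32 is the only edit needed to move this client onto the VPN path:

# Client config - illustrative sketch; keys and endpoint are placeholders
[Interface]
Address = 10.4.0.50/32                   # 0-127 range: direct internet path
DNS = 10.4.0.1
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = <server-public-ip>:45822
AllowedIPs = 0.0.0.0/0                   # send everything through the tunnel; server decides the path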

Special Reserved Addresses

IP Address | Purpose                             | Component
10.4.0.1   | WireGuard Server / DNS Server       | wg0 interface, dnsmasq
10.4.0.3   | Internal Service (rafflab.internal) | Custom DNS record
10.4.0.7   | Internal Service (vault.raff.local) | Custom DNS record
10.200.1.1 | Veth Host Side                      | Bridge to namespace
10.200.1.2 | Veth Namespace Side                 | Inside vpn_ns
10.0.1.191 | AWS EC2 Instance IP                 | ens5 interface (private VPC)

🔄 Data Flow & Packet Routing

Understanding how packets flow through this system is crucial to grasping its architecture. The routing decision happens at the server based on the packet's source IP, and clients are completely unaware of which path their traffic takes.

Scenario 1: Direct Internet Path (Client with IP 10.4.0.50)

Step 1: Packet Arrival

Client (10.4.0.50) sends HTTP request to google.com (172.217.164.46)

Source: 10.4.0.50:54321 → Destination: 172.217.164.46:443

Step 2: WireGuard Decryption

Packet arrives on wg0 interface, WireGuard decrypts it

Packet enters host network namespace with original source IP intact

Step 3: Routing Decision

Source IP 10.4.0.50 is in range 10.4.0.0-127 (non-VPN range)

iptables does not mark this packet (no fwmark 0x200)

Policy routing rules are not triggered

Step 4: Direct Forwarding

Packet matches iptables FORWARD rule:

iptables -A FORWARD -i wg0 -s 10.4.0.0/25 -o ens5 -j ACCEPT

Packet is forwarded from wg0 to ens5 (AWS network interface)

Step 5: NAT Translation

Packet hits NAT POSTROUTING rule:

iptables -t nat -A POSTROUTING -s 10.4.0.0/25 -o ens5 -j MASQUERADE

Source IP changed: 10.4.0.50 → 10.0.1.191 (AWS instance IP)

Connection tracking records translation for return packets

Step 6: Internet Egress

Packet exits via ens5 to AWS infrastructure

AWS performs additional NAT: 10.0.1.191 → Public Elastic IP

Packet reaches google.com directly from AWS data center

Step 7: Return Path

Response from google.com arrives at AWS Elastic IP

AWS NAT translates back to 10.0.1.191

iptables connection tracking reverses MASQUERADE: 10.0.1.191 → 10.4.0.50

Packet forwarded to wg0 interface

WireGuard encrypts and sends to client

📈 Direct Internet Flow Diagram

[Flow diagram: client 10.4.0.50 → wg0 decrypts → iptables FORWARD (wg0 → ens5, main routing table) → NAT POSTROUTING (MASQUERADE 10.4.0.50 → 10.0.1.191, connection tracking enabled) → ens5 → internet; the response returns via reverse NAT.]

Key facts:

  • No packet marking - uses the main routing table
  • Fastest path - direct to AWS infrastructure
  • Privacy: AWS/ISP can see the destination
  • Best for: low-latency, non-sensitive traffic

Scenario 2: VPN-Routed Path (Client with IP 10.4.0.200)

Step 1: Packet Arrival

Client (10.4.0.200) sends HTTP request to google.com (172.217.164.46)

Source: 10.4.0.200:54321 → Destination: 172.217.164.46:443

Step 2: WireGuard Decryption

Packet arrives on wg0 interface, WireGuard decrypts it

Packet enters host network namespace

Step 3: Packet Marking

Source IP 10.4.0.200 is in range 10.4.0.128-255 (VPN range)

iptables mangle table marks packet:

iptables -t mangle -A PREROUTING -i wg0 -m iprange --src-range 10.4.0.128-10.4.0.255 -j MARK --set-mark 0x200

Packet now has fwmark 0x200 (512 in decimal)

Step 4: Policy Routing Lookup

Kernel checks routing policy database (RPDB):

ip rule add fwmark 0x200 table 200 priority 200

Packet with fwmark 0x200 must use routing table 200

Table 200 has default route:

ip route add default via 10.200.1.2 dev veth-wg table 200

Decision: Send packet to 10.200.1.2 via veth-wg

Step 5: Namespace Transition

Packet travels through veth-wg (host end) to veth-vpn (namespace end)

Packet crosses into vpn_ns network namespace

Now operating in isolated routing environment

Step 6: Namespace NAT

Inside vpn_ns, packet hits NAT rule:

iptables -t nat -A POSTROUTING -m iprange --src-range 10.4.0.128-10.4.0.255 -o tun0 -j MASQUERADE

Source IP changed: 10.4.0.200 → tun0 IP (VPN-assigned IP)

Step 7: OpenVPN Encryption

Packet sent to tun0 interface

OpenVPN client encrypts packet

Outer packet: Source = 10.200.1.2, Destination = the OpenVPN provider's server

Step 8: Return to Host Namespace

Encrypted OpenVPN packet exits vpn_ns via veth-vpn

Arrives in host namespace at veth-wg

Host routing decision: the OpenVPN server's address is reached via ens5

Step 9: Internet Egress via VPN

Packet exits via ens5 to AWS network

AWS routes the packet to the OpenVPN provider's server

OpenVPN provider decrypts, sees original request to google.com

OpenVPN provider forwards to google.com from VPN exit node

Step 10: Return Path

Response from google.com → OpenVPN provider exit node

OpenVPN provider encrypts, sends to OpenVPN client

Packet arrives at ens5, forwarded to veth-wg

Enters vpn_ns, OpenVPN decrypts to tun0

NAT reverse translation: tun0 IP → 10.4.0.200

Exits namespace via veth-vpn to veth-wg

Policy routing (the priority 101 rule: to 10.4.0.0/24 → main table) returns the packet to wg0

WireGuard encrypts and sends to client

📈 VPN-Routed Flow Diagram

[Flow diagram: host namespace - client 10.4.0.200 → wg0 decrypts → mangle MARK 0x200 (512 in decimal) → policy routing (fwmark 0x200 → table 200 → via 10.200.1.2); VPN namespace (vpn_ns) - veth-vpn receives the packet → namespace NAT (MASQUERADE to the VPN-assigned IP on tun0) → OpenVPN encrypts on tun0 → encrypted packet exits via ens5 to the OpenVPN provider's server; traffic appears on the internet from the provider's exit IP.]

Scenario 3: Internal Traffic (Client-to-Client)

Complete Local Path

When Client A (10.4.0.50) communicates with Client B (10.4.0.200):

  • Packet arrives on wg0 from Client A
  • Destination 10.4.0.200 matches policy rule:
  • ip rule add from 10.4.0.0/24 to 10.4.0.0/24 table main priority 100
  • Uses main routing table, which has direct route to wg0
  • iptables allows peer-to-peer:
  • iptables -A FORWARD -i wg0 -o wg0 -s 10.4.0.0/24 -d 10.4.0.0/24 -j ACCEPT
  • Packet forwarded from wg0 back to wg0
  • WireGuard encrypts and sends to Client B
  • No VPN or internet transit involved

🌍 DNS Resolution Architecture

DNS resolution in this system is complex because it must handle three distinct scenarios: local domain resolution, VPN client DNS privacy, and the server's own DNS queries that need to traverse the VPN.

DNS Server Configuration (dnsmasq)

The dnsmasq server provides local-domain answers, upstream forwarding, and caching for every WireGuard client, as summarized in Core Components above.

DNS Resolution Paths

Path 1: Non-VPN Client DNS Query (Client 10.4.0.50)

Query: Client 10.4.0.50 queries "google.com"

Step 1: DNS query sent to 10.4.0.1:53 (dnsmasq)

Step 2: dnsmasq checks local records (not found)

Step 3: dnsmasq forwards to upstream: 1.1.1.2 or 8.8.8.8

Step 4: DNS query exits via ens5 directly (source: 10.0.1.191)

Step 5: Response returns, dnsmasq caches and replies to client

Privacy Level: Cloudflare/Google sees the query coming from the AWS instance (the public Elastic IP that 10.0.1.191 is NATed to)

Path 2: VPN Client DNS Query (Client 10.4.0.200)

Query: Client 10.4.0.200 queries "google.com"

Step 1: DNS query sent to 10.4.0.1:53 (dnsmasq)

Step 2: dnsmasq checks local records (not found)

Step 3: dnsmasq creates upstream query to 1.1.1.2:53

Step 4: Query packet marked by iptables:

iptables -t mangle -A OUTPUT -p udp --dport 53 -d 1.1.1.2 -j MARK --set-mark 0x100

Step 5: Marked packet routed via table 200 (through namespace)

Step 6: Packet enters vpn_ns via veth

Step 7: NAT applied inside namespace:

iptables -t nat -A POSTROUTING -s 10.0.1.191 -d 1.1.1.2 -p udp --dport 53 -j SNAT --to-source 10.200.1.1

Step 8: Packet exits tun0 through OpenVPN to OpenVPN provider

Step 9: OpenVPN provider forwards to 1.1.1.2, receives response

Step 10: Response returns through VPN tunnel, NAT reversed

Step 11: dnsmasq receives response, replies to client 10.4.0.200

Privacy Level: Cloudflare sees the query coming from the OpenVPN provider's exit IP; the AWS instance is never exposed

Path 3: Local Domain Query (Any Client)

Query: Any client queries "vault.raff.local"

Step 1: DNS query sent to 10.4.0.1:53

Step 2: dnsmasq checks local records (FOUND)

Step 3: dnsmasq immediately responds with 10.4.0.7

No upstream query needed

Client routes to 10.4.0.7 as internal traffic (never leaves WireGuard)
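
All three paths can be exercised from any client with dig; a quick verification sketch (the expected answers are the values documented above):

# Local record: answered by dnsmasq itself, no upstream query is sent
dig @10.4.0.1 vault.raff.local +short    # expected: 10.4.0.7
dig @10.4.0.1 rafflab.internal +short    # expected: 10.4.0.3

# Public name: forwarded to an upstream resolver
dig @10.4.0.1 google.com +short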

🌐 DNS Resolution Flow Comparison

[Comparison diagram: privacy of three DNS paths for the query "google.com".]

  • Non-VPN client (10.4.0.50): dnsmasq (10.4.0.1) → direct via ens5 → Cloudflare 1.1.1.2. LOW privacy: Cloudflare sees the AWS public IP, can correlate all non-VPN queries, and can track browsing patterns. Best for low-latency traffic.
  • VPN client (10.4.0.200): dnsmasq (10.4.0.1) → marked with fwmark 0x100 → veth into vpn_ns → OpenVPN tun0 → Cloudflare via the OpenVPN provider. HIGH privacy: Cloudflare sees only the provider's exit IP, which cannot be linked to the AWS instance and is shared with many other VPN users. Best for privacy-sensitive browsing.
  • Local domain (vault.raff.local): dnsmasq matches its local A record and returns 10.4.0.7; no external query is ever made, so external DNS sees nothing. Best for internal services and lab use.

Why DNS Queries Are Routed Through VPN

⚠️ DNS Leak Prevention

The Problem: Even if your HTTP traffic goes through a VPN, DNS queries can "leak" outside the VPN if not carefully handled. This means your DNS provider (like Cloudflare or Google) can see what websites you're visiting, defeating much of the VPN's privacy benefit.

The Solution: This system marks DNS queries from dnsmasq destined for upstream servers with fwmark 0x100, forcing them to route through the VPN namespace and exit via OpenVPN provider. This ensures that VPN clients' DNS resolution is also anonymized.

Technical Detail: The DNS query marking is very specific - only packets going TO port 53 (DNS queries) are marked, not responses coming FROM port 53. This prevents routing loops.
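
One way to confirm the marking works is to watch the direct interface for plaintext DNS that should not be there; a hedged verification sketch:

# Queries to 1.1.1.2 should NOT appear on ens5 if the fwmark 0x100 routing is working
tcpdump -ni ens5 'udp port 53 and dst host 1.1.1.2'

# The marked queries should instead cross the veth pair into the namespace
tcpdump -ni veth-wg 'udp port 53'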

📦 Network Namespace Architecture

Network namespaces are a Linux kernel feature that provides complete network stack isolation. This system uses a namespace called vpn_ns to create a completely separate networking environment for VPN-routed traffic.

Why Namespaces Are Essential

🔧 The Routing Conflict Problem

Without namespaces, you face an impossible routing conflict:

  • The OpenVPN client needs a default route pointing to the VPN (tun0)
  • But direct-internet traffic needs a default route pointing to ens5
  • Linux can only have ONE default route per routing table

Solution: Run OpenVPN in a namespace with its own routing table. The namespace has a default route to tun0, while the host keeps its default route to ens5. Traffic is selectively sent into the namespace via policy routing.
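
The two default routes can coexist because they live in separate network stacks, which is easy to see side by side (the gateway addresses in the comments are illustrative):

# Host namespace: default route points at the physical NIC
ip route show default
# e.g. default via 10.0.1.1 dev ens5

# VPN namespace: default route points at the OpenVPN tunnel
ip netns exec vpn_ns ip route show default
# e.g. default via <tun0 peer> dev tun0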

Namespace Components

Component                | Inside Namespace? | Purpose
OpenVPN Process          | ✅ Yes            | Runs entirely within vpn_ns, only sees namespace network
tun0 Interface           | ✅ Yes            | Created by OpenVPN inside namespace
veth-vpn (namespace end) | ✅ Yes            | Connection point to host namespace
Namespace Routing Table  | ✅ Yes            | Default route via tun0
Namespace iptables       | ✅ Yes            | Separate NAT and forwarding rules
veth-wg (host end)       | ❌ No             | Host-namespace bridge to vpn_ns
WireGuard (wg0)          | ❌ No             | Remains in host namespace
ens5                     | ❌ No             | Physical NIC stays in host

Namespace Lifecycle

Creation (wg0-up.sh)

  1. Create namespace: ip netns add vpn_ns
  2. Create veth pair: ip link add veth-wg type veth peer name veth-vpn
  3. Move one end into namespace: ip link set veth-vpn netns vpn_ns
  4. Configure IPs on both ends
  5. Start OpenVPN inside namespace: ip netns exec vpn_ns openvpn ...
  6. Configure routes inside namespace
  7. Configure iptables inside namespace
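
Steps 1-5 condensed into a runnable sketch (the interface names and addresses are the ones used throughout this document; the OpenVPN config path is a placeholder):

#!/bin/bash
# Creation phase of wg0-up.sh, abbreviated; error handling omitted
set -e
ip netns add vpn_ns
ip link add veth-wg type veth peer name veth-vpn
ip link set veth-vpn netns vpn_ns
ip addr add 10.200.1.1/24 dev veth-wg
ip link set veth-wg up
ip netns exec vpn_ns ip addr add 10.200.1.2/24 dev veth-vpn
ip netns exec vpn_ns ip link set veth-vpn up
ip netns exec vpn_ns ip link set lo up
# Launch OpenVPN entirely inside the namespace (config path is a placeholder)
ip netns exec vpn_ns openvpn --config /etc/openvpn/provider.conf --daemon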

Teardown (wg0-down.sh)

  1. Stop OpenVPN process (kill PID)
  2. Remove iptables rules inside namespace
  3. Delete veth pair (automatically removes both ends)
  4. Delete namespace: ip netns del vpn_ns
  5. Clean up DNS config: rm -rf /etc/netns/vpn_ns

Viewing Namespace State

🔍 Useful Commands for Namespace Inspection

# List all namespaces
ip netns list

# Execute command in namespace
ip netns exec vpn_ns <command>

# View namespace interfaces
ip netns exec vpn_ns ip addr show

# View namespace routing table
ip netns exec vpn_ns ip route show

# View namespace iptables
ip netns exec vpn_ns iptables -L -n -v
ip netns exec vpn_ns iptables -t nat -L -n -v

# Check OpenVPN connection
ip netns exec vpn_ns ip addr show tun0
tail -f /var/log/openvpn-ns.log

🛣️ Policy-Based Routing

Policy routing (also called policy-based routing or PBR) is the mechanism that makes split-tunneling possible. Instead of routing decisions being based solely on destination addresses, policy routing allows decisions based on source address, packet marks, and other criteria.

The Linux Routing Policy Database (RPDB)

Linux maintains a database of routing rules that are checked in priority order. The first matching rule determines which routing table to use.

# View current routing rules
ip rule list

# Output from this system:
0:      from all lookup local                      # Priority 0 (always first)
100:    from 10.4.0.0/24 to 10.4.0.0/24 lookup main
101:    from all to 10.4.0.0/24 lookup main
102:    from all to 10.0.0.0/16 lookup main
103:    from all to 10.0.1.0/24 lookup main
104:    from all to 10.0.2.0/24 lookup main
200:    from all fwmark 0x200 lookup 200            # VPN client traffic
201:    from all fwmark 0x100 lookup 200            # DNS queries for VPN clients
32766:  from all lookup main
32767:  from all lookup default

Rule Priority Explanation

Priority | Rule                                        | Purpose                                     | Action
0        | from all lookup local                       | System-critical local routes                | Never modify this
100      | from 10.4.0.0/24 to 10.4.0.0/24 lookup main | WireGuard peer-to-peer traffic              | Keep client-to-client traffic local, never route through VPN
101-104  | from all to [private CIDRs] lookup main     | Exclude AWS VPC and local networks from VPN | Traffic to these destinations always uses the main table (direct)
200      | from all fwmark 0x200 lookup 200            | VPN client data traffic                     | Route to table 200 (through namespace)
201      | from all fwmark 0x100 lookup 200            | DNS queries for VPN clients                 | Route to table 200 (through namespace)
32766    | from all lookup main                        | Default system routing                      | All unmarked traffic uses the main table

Custom Routing Table 200

Table 200 is created specifically for VPN-routed traffic:

# View table 200 routes
ip route show table 200

# Output:
default via 10.200.1.2 dev veth-wg

This simple default route sends all traffic in table 200 to the namespace (10.200.1.2 is the namespace end of the veth pair).

Why Priority Order Matters

⚠️ Order Is Critical

Rules are evaluated from lowest to highest priority number. This means:

  • Priority 100-104 (exclusions) MUST come before priority 200-201 (VPN routing)
  • If a packet matches priority 100, it never reaches priority 200
  • This ensures local traffic is excluded even for VPN clients

Example: A VPN client (10.4.0.200) pinging another WireGuard peer (10.4.0.50)

  • Packet would normally be marked with fwmark 0x200 (VPN client)
  • But priority 100 rule matches first: source 10.4.0.200 (in 10.4.0.0/24) → destination 10.4.0.50 (in 10.4.0.0/24)
  • Uses main table, routes directly via wg0
  • Priority 200 rule never evaluated

🔄 NAT & Masquerading

Network Address Translation (NAT) is used extensively to allow multiple private IP addresses to share public IPs. This system employs NAT at multiple layers.

Layer 1: Host Namespace NAT (Direct Traffic)

For clients in the direct internet range (10.4.0.0-127):

iptables -t nat -A POSTROUTING -s 10.4.0.0/25 -o ens5 -j MASQUERADE

Effect: Translates source IPs from 10.4.0.0-127 to 10.0.1.191 (AWS instance private IP)

Then: AWS VPC performs additional NAT from 10.0.1.191 to the Elastic IP

Layer 2: VPN Namespace NAT (VPN Traffic)

Inside the vpn_ns namespace:

ip netns exec vpn_ns iptables -t nat -A POSTROUTING \
    -m iprange --src-range 10.4.0.128-10.4.0.255 -o tun0 -j MASQUERADE

Effect: Translates source IPs from VPN clients to the OpenVPN-assigned IP on tun0

Then: OpenVPN provider performs NAT from tun0 IP to the VPN exit node's public IP

Layer 3: DNS Query NAT

Special NAT for dnsmasq queries going through the VPN:

iptables -t nat -A POSTROUTING \
    -s 10.0.1.191 -d 1.1.1.2 -p udp --dport 53 \
    -j SNAT --to-source 10.200.1.1

Why Needed: dnsmasq (running on 10.0.1.191) sends DNS queries. When these queries need to go through the namespace, the source must be rewritten to 10.200.1.1 so responses can route back through the veth pair correctly.

Connection Tracking

All NAT operations rely on conntrack (connection tracking):

🔍 Viewing Connection Tracking

# View all tracked connections
conntrack -L

# View connections through VPN namespace
ip netns exec vpn_ns conntrack -L

# Monitor new connections in real-time
conntrack -E

⏱️ System Lifecycle

Startup Sequence (wg0-up.sh)

Phase 1: Core Networking Setup

  1. Enable IP forwarding: sysctl -w net.ipv4.ip_forward=1
  2. Create network namespace: ip netns add vpn_ns
  3. Create veth pair: ip link add veth-wg type veth peer name veth-vpn
  4. Move veth-vpn into namespace: ip link set veth-vpn netns vpn_ns
  5. Configure IP addresses on both veth ends
  6. Bring interfaces up
  7. Set WireGuard MTU to 1420

Phase 2: Namespace Configuration

  1. Enable forwarding inside namespace
  2. Set temporary default route inside namespace (via veth)
  3. Add a static route for the OpenVPN server's IP via the veth (so the encrypted tunnel keeps a path out after tun0 takes over the default route)
  4. Add route for WireGuard subnet back through veth
  5. Configure DNS inside namespace (/etc/netns/vpn_ns/resolv.conf)

Phase 3: OpenVPN Connection

  1. Launch OpenVPN client inside namespace
  2. OpenVPN connects to OpenVPN provider server
  3. tun0 interface created inside namespace
  4. OpenVPN sets default route to tun0 (overriding temporary route)
  5. Connection established and verified

Phase 4: NAT Configuration

  1. Set up NAT inside namespace (VPN traffic → tun0)
  2. Set up NAT in host namespace (direct traffic → ens5)
  3. Set up DNS query NAT (dnsmasq → namespace)
  4. Set up MSS clamping (both namespaces)

Phase 5: Policy Routing

  1. Create routing table 200 with default route via namespace
  2. Add priority 100 rule (WireGuard peer-to-peer)
  3. Add priority 101-104 rules (exclude local networks)
  4. Add priority 200 rule (VPN client traffic)
  5. Add priority 201 rule (DNS queries)
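
Condensed into commands, this phase corresponds to the rules shown in the Policy-Based Routing section (a sketch; the CIDRs are the ones used throughout this document):

# Table 200: everything defaults into the namespace
ip route add default via 10.200.1.2 dev veth-wg table 200

# Exclusions first: lower priority numbers are evaluated first
ip rule add from 10.4.0.0/24 to 10.4.0.0/24 table main priority 100
ip rule add to 10.4.0.0/24 table main priority 101
ip rule add to 10.0.0.0/16 table main priority 102
ip rule add to 10.0.1.0/24 table main priority 103
ip rule add to 10.0.2.0/24 table main priority 104

# Marked traffic is sent to table 200
ip rule add fwmark 0x200 table 200 priority 200
ip rule add fwmark 0x100 table 200 priority 201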

Phase 6: Packet Marking & Forwarding

  1. Set up mangle rules to mark VPN client packets (fwmark 0x200)
  2. Set up mangle rules to mark DNS queries (fwmark 0x100)
  3. Configure FORWARD rules for all traffic paths
  4. Allow peer-to-peer communication on wg0
  5. System ready for traffic

Shutdown Sequence (wg0-down.sh)

Teardown happens in reverse order to prevent routing issues:

  1. Stop OpenVPN: Kill process, wait for cleanup
  2. Remove namespace iptables rules: NAT and mangle rules
  3. Remove host iptables rules: Policy routing marks, NAT, forwarding
  4. Remove policy routing rules: Clean RPDB
  5. Flush custom routing table 200
  6. Delete veth pair: Automatically removes both ends
  7. Delete namespace: All namespace-specific configs gone
  8. Clean up DNS config: Remove /etc/netns/vpn_ns

💼 Use Cases & Scenarios

Use Case 1: Privacy-Focused Browsing

Scenario: User wants to browse sensitive content without revealing their real IP

Solution: Assign WireGuard IP from VPN range (10.4.0.128-255)

Result: All browsing traffic exits through OpenVPN provider, DNS queries also anonymized

Use Case 2: Geo-Restricted Content

Scenario: Streaming service only available in certain countries

Solution: Use VPN range IP, select OpenVPN provider server in target country

Result: Service sees connection from VPN country

Use Case 3: Low-Latency Gaming

Scenario: Online gaming requires low latency, VPN adds too much overhead

Solution: Assign IP from direct internet range (10.4.0.0-127)

Result: Game traffic bypasses VPN, lowest possible latency

Use Case 4: Internal Service Access

Scenario: Access company internal services (vault.raff.local)

Solution: Any WireGuard IP works, DNS resolves to 10.4.0.7

Result: Traffic stays within WireGuard network, never reaches internet

Use Case 5: Split Configuration on Single Device

Scenario: User wants some apps through VPN, others direct

Solution: Device connects to WireGuard multiple times with different IPs

Advanced: Use network namespaces on client device, route apps differently

🔧 Configuration Variations

Variation 1: Different Split Ratios

The current 50/50 split can be adjusted:
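
For example, shrinking the VPN share to the top quarter of the subnet (10.4.0.192/26) would mean adjusting the range in the mangle mark and in both NAT rules; a sketch, assuming everything else stays as documented:

# 75/25 split: only 10.4.0.192-255 routes through the VPN (illustrative)
iptables -t mangle -A PREROUTING -i wg0 \
    -m iprange --src-range 10.4.0.192-10.4.0.255 -j MARK --set-mark 0x200

# Direct NAT now covers the lower three quarters
iptables -t nat -A POSTROUTING -s 10.4.0.0/25 -o ens5 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 10.4.0.128/26 -o ens5 -j MASQUERADE

# Namespace NAT narrows to the same range
ip netns exec vpn_ns iptables -t nat -A POSTROUTING \
    -m iprange --src-range 10.4.0.192-10.4.0.255 -o tun0 -j MASQUERADE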

Variation 2: Multiple VPN Providers

Could create multiple namespaces for different VPN providers:
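
The pattern would repeat the namespace recipe with new names, a second routing table, and a distinct fwmark; a sketch where every name, address, and range is illustrative:

# Second provider namespace (all names/addresses are illustrative)
ip netns add vpn_ns2
ip link add veth-wg2 type veth peer name veth-vpn2
ip link set veth-vpn2 netns vpn_ns2
ip addr add 10.200.2.1/24 dev veth-wg2
ip link set veth-wg2 up
ip netns exec vpn_ns2 ip addr add 10.200.2.2/24 dev veth-vpn2
ip netns exec vpn_ns2 ip link set veth-vpn2 up

# Dedicated table and mark for the second provider
ip route add default via 10.200.2.2 dev veth-wg2 table 201
ip rule add fwmark 0x300 table 201 priority 202
iptables -t mangle -A PREROUTING -i wg0 \
    -m iprange --src-range 10.4.0.160-10.4.0.191 -j MARK --set-mark 0x300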

Variation 3: Protocol-Based Routing

Instead of IP ranges, route based on protocol:
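
For instance, sending only web traffic through the VPN regardless of client IP; a sketch using the same fwmark machinery:

# Mark HTTP/HTTPS from any WireGuard client for the VPN path (illustrative)
iptables -t mangle -A PREROUTING -i wg0 -p tcp -m multiport \
    --dports 80,443 -j MARK --set-mark 0x200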

Variation 4: Time-Based Routing

Route through VPN only during certain hours:
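
The xt_time match makes this possible without external schedulers; a sketch (note that the time match uses UTC unless --kerneltz is given):

# Mark VPN-range traffic only between 22:00 and 06:00 (illustrative)
iptables -t mangle -A PREROUTING -i wg0 \
    -m iprange --src-range 10.4.0.128-10.4.0.255 \
    -m time --timestart 22:00 --timestop 06:00 \
    -j MARK --set-mark 0x200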

🔒 Security Model

Encryption Layers

Layer                               | Protocol                      | Protects Against
Client ↔ WireGuard Server           | WireGuard (Noise Protocol)    | ISP snooping, local network attacks
WireGuard Server ↔ OpenVPN provider | OpenVPN (TLS)                 | AWS network monitoring, datacenter attacks
OpenVPN provider ↔ Destination      | TLS/HTTPS (application layer) | VPN provider snooping, exit node attacks

Attack Surface Analysis

✅ Protected Scenarios

⚠️ Potential Vulnerabilities

Firewall Rules

🛡️ Default Deny Posture

This system uses an "allow what's explicitly permitted" approach:

  • No INPUT rules shown - by default only allows established connections
  • FORWARD chain has explicit rules for each permitted path
  • Anything not explicitly allowed is dropped

For production, add INPUT rules that restrict access to SSH and the WireGuard port.
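
A hedged example of such rules; the SSH source CIDR is a placeholder:

# Restrict inbound traffic to return flows, WireGuard, and admin SSH (illustrative)
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p udp --dport 45822 -j ACCEPT                 # WireGuard handshake/data
iptables -A INPUT -p tcp --dport 22 -s <admin-cidr> -j ACCEPT    # SSH (placeholder CIDR)
iptables -P INPUT DROP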

Principle of Least Privilege

📚 Summary

This WireGuard split-tunnel VPN system represents a sophisticated approach to selective routing, combining modern VPN protocols with Linux networking primitives. By understanding the architecture, data flow, and component interactions documented here, you have the conceptual foundation to deploy, modify, and troubleshoot this system in any environment.

The key insight is that IP address allocation is destiny - simply by assigning a client an IP in one half of the subnet versus the other, you completely change how their traffic traverses the internet, all transparently and without client-side configuration.