🎯 System Overview
This WireGuard VPN system implements a sophisticated split-tunnel architecture that intelligently routes different clients through different network paths based on their IP addresses. The system provides granular control over which traffic flows through a commercial VPN (OpenVPN provider) and which traffic goes directly to the internet, all while maintaining a unified WireGuard VPN interface for clients.
🎓 Key Concept: Split-Tunnel VPN
Unlike traditional "all-or-nothing" VPN setups where all traffic either goes through the VPN or doesn't, this system creates selective routing where different clients connected to the same WireGuard server can have their traffic routed through completely different paths based solely on their IP address assignment. This allows for optimized performance, cost control, and flexible privacy policies within a single infrastructure.
Core Design Principles
- IP-Based Traffic Separation: Traffic routing decisions are made based on source IP addresses, dividing the 10.4.0.0/24 subnet into two distinct groups
- Network Namespace Isolation: VPN-routed traffic is processed in an isolated Linux network namespace to prevent routing conflicts
- Transparent Routing: Clients require no special configuration beyond their IP assignment; routing happens automatically on the server
- Local Traffic Optimization: All local/private network traffic bypasses the VPN regardless of client type
- DNS Privacy: Custom DNS resolution with split-horizon DNS for internal services
🏗️ High-Level Architecture
📊 System Architecture Diagram
🧩 Core Components
WireGuard Interface (wg0)
Purpose: Provides encrypted VPN tunnel endpoint for client connections
IP: 10.4.0.1/24
Port: UDP 45822
Key Features:
- Modern cryptography (Noise protocol)
- Low latency overhead
- No routing table (Table = off)
- MTU: 1420 bytes
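A minimal wg0 definition consistent with these parameters might look like the following sketch; the key placeholder and the PostUp/PostDown script paths are assumptions, not taken from the deployed config:

```ini
[Interface]
Address = 10.4.0.1/24
ListenPort = 45822
MTU = 1420
# Table = off keeps wg-quick from installing routes; policy routing decides paths
Table = off
PrivateKey = <server-private-key>
# Assumed hooks into the setup/teardown scripts described later
PostUp = /etc/wireguard/wg0-up.sh
PostDown = /etc/wireguard/wg0-down.sh
```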
dnsmasq DNS Server
Purpose: DNS resolution with custom local domains and upstream forwarding
Listens On:
- 127.0.0.1 (localhost)
- 10.4.0.1 (WireGuard)
- 10.200.1.1 (veth bridge)
Upstream DNS: Cloudflare (1.1.1.2, 1.0.0.2) and Google (8.8.8.8, 8.8.4.4)
Custom Records:
- vault.raff.local → 10.4.0.7
- rafflab.internal → 10.4.0.3
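A dnsmasq configuration matching this description might look like the sketch below; bind-interfaces and no-resolv are assumptions, while the addresses and records come from this section:

```ini
# /etc/dnsmasq.conf (sketch)
listen-address=127.0.0.1
listen-address=10.4.0.1
listen-address=10.200.1.1
# Assumed: bind only the listed addresses and ignore /etc/resolv.conf
bind-interfaces
no-resolv
server=1.1.1.2
server=1.0.0.2
server=8.8.8.8
server=8.8.4.4
# Split-horizon records for internal services
address=/vault.raff.local/10.4.0.7
address=/rafflab.internal/10.4.0.3
```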
Network Namespace (vpn_ns)
Purpose: Isolated network environment for VPN-routed traffic
Why Needed: Prevents routing conflicts between direct and VPN paths
Contains:
- OpenVPN client process
- tun0 interface (VPN tunnel)
- Separate routing table
- Independent iptables rules
Virtual Ethernet Pair (veth)
Purpose: Connects host namespace to VPN namespace
Host End: veth-wg (10.200.1.1/24)
Namespace End: veth-vpn (10.200.1.2/24)
Function: Acts as a virtual network cable between namespaces
OpenVPN Client
Purpose: Connects to commercial VPN provider (OpenVPN provider)
Runs Inside: vpn_ns namespace
Server: il66.OpenVPN provider.com
Protocol: UDP
Interface: tun0 (dynamically assigned IP)
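The client is presumably launched along these lines; the config and PID file paths are assumptions, while the log path matches the inspection commands shown later in this document:

```bash
# Hypothetical launch inside the namespace
ip netns exec vpn_ns openvpn \
  --config /etc/openvpn/provider.ovpn \
  --daemon \
  --log /var/log/openvpn-ns.log \
  --writepid /run/openvpn-ns.pid
```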
iptables / netfilter
Purpose: Packet filtering, NAT, and marking
Tables Used:
- nat: Address translation
- mangle: Packet marking for routing
- filter: Forwarding rules
Policy Routing Engine
Purpose: Routes packets to different routing tables based on marks and source
Custom Table: Table 200
Uses: fwmark-based routing decisions
Physical Network Interface (ens5)
Purpose: AWS EC2 instance's primary network interface
IP: 10.0.1.191 (private AWS VPC IP)
Function: Gateway to internet for direct-routed traffic
🌐 IP Address Allocation Strategy
💡 The Split Allocation Model
The system divides the 10.4.0.0/24 subnet into two equal halves. This binary split is the fundamental decision point for all routing logic in the system. Your IP address determines your entire network path.
| IP Range | Subnet | Count | Routing Path | Use Case |
|---|---|---|---|---|
| 10.4.0.0 - 10.4.0.127 | 10.4.0.0/25 | 128 IPs | Direct Internet | Low-latency applications, streaming, gaming, trusted services |
| 10.4.0.128 - 10.4.0.255 | 10.4.0.128/25 | 128 IPs | Through OpenVPN provider | Privacy-sensitive browsing, geo-restricted content, untrusted networks |
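On the client side, only the assigned address differs between the two paths. A hypothetical client config for the VPN range (keys and server address are placeholders):

```ini
[Interface]
# 10.4.0.200 falls in the VPN range; use e.g. 10.4.0.50 for the direct range
Address = 10.4.0.200/32
PrivateKey = <client-private-key>
DNS = 10.4.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = <server-public-ip>:45822
AllowedIPs = 0.0.0.0/0
```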
Special Reserved Addresses
| IP Address | Purpose | Component |
|---|---|---|
| 10.4.0.1 | WireGuard Server / DNS Server | wg0 interface, dnsmasq |
| 10.4.0.3 | Internal Service (rafflab.internal) | Custom DNS record |
| 10.4.0.7 | Internal Service (vault.raff.local) | Custom DNS record |
| 10.200.1.1 | Veth Host Side | Bridge to namespace |
| 10.200.1.2 | Veth Namespace Side | Inside vpn_ns |
| 10.0.1.191 | AWS EC2 Instance IP | ens5 interface (private VPC) |
🔄 Data Flow & Packet Routing
Understanding how packets flow through this system is crucial to grasping its architecture. The routing decision happens at the server based on the packet's source IP, and clients are completely unaware of which path their traffic takes.
Scenario 1: Direct Internet Path (Client with IP 10.4.0.50)
Step 1: Packet Arrival
Client (10.4.0.50) sends HTTP request to google.com (172.217.164.46)
Source: 10.4.0.50:54321 → Destination: 172.217.164.46:443
Step 2: WireGuard Decryption
Packet arrives on wg0 interface, WireGuard decrypts it
Packet enters host network namespace with original source IP intact
Step 3: Routing Decision
Source IP 10.4.0.50 is in range 10.4.0.0-127 (non-VPN range)
iptables does not mark this packet (no fwmark 0x200)
Policy routing rules are not triggered
Step 4: Direct Forwarding
Packet matches iptables FORWARD rule:
iptables -A FORWARD -i wg0 -s 10.4.0.0/25 -o ens5 -j ACCEPT
Packet is forwarded from wg0 to ens5 (AWS network interface)
Step 5: NAT Translation
Packet hits NAT POSTROUTING rule:
iptables -t nat -A POSTROUTING -s 10.4.0.0/25 -o ens5 -j MASQUERADE
Source IP changed: 10.4.0.50 → 10.0.1.191 (AWS instance IP)
Connection tracking records translation for return packets
Step 6: Internet Egress
Packet exits via ens5 to AWS infrastructure
AWS performs additional NAT: 10.0.1.191 → Public Elastic IP
Packet reaches google.com directly from AWS data center
Step 7: Return Path
Response from google.com arrives at AWS Elastic IP
AWS NAT translates back to 10.0.1.191
iptables connection tracking reverses MASQUERADE: 10.0.1.191 → 10.4.0.50
Packet forwarded to wg0 interface
WireGuard encrypts and sends to client
📈 Direct Internet Flow Diagram
Scenario 2: VPN-Routed Path (Client with IP 10.4.0.200)
Step 1: Packet Arrival
Client (10.4.0.200) sends HTTP request to google.com (172.217.164.46)
Source: 10.4.0.200:54321 → Destination: 172.217.164.46:443
Step 2: WireGuard Decryption
Packet arrives on wg0 interface, WireGuard decrypts it
Packet enters host network namespace
Step 3: Packet Marking
Source IP 10.4.0.200 is in range 10.4.0.128-255 (VPN range)
iptables mangle table marks packet:
iptables -t mangle -A PREROUTING -i wg0 -m iprange --src-range 10.4.0.128-10.4.0.255 -j MARK --set-mark 0x200
Packet now has fwmark 0x200 (512 in decimal)
Step 4: Policy Routing Lookup
Kernel checks routing policy database (RPDB):
ip rule add fwmark 0x200 table 200 priority 200
Packet with fwmark 0x200 must use routing table 200
Table 200 has default route:
ip route add default via 10.200.1.2 dev veth-wg table 200
Decision: Send packet to 10.200.1.2 via veth-wg
Step 5: Namespace Transition
Packet travels through veth-wg (host end) to veth-vpn (namespace end)
Packet crosses into vpn_ns network namespace
Now operating in isolated routing environment
Step 6: Namespace NAT
Inside vpn_ns, packet hits NAT rule:
iptables -t nat -A POSTROUTING -m iprange --src-range 10.4.0.128-10.4.0.255 -o tun0 -j MASQUERADE
Source IP changed: 10.4.0.200 → tun0 IP (VPN-assigned IP)
Step 7: OpenVPN Encryption
Packet sent to tun0 interface
OpenVPN client encrypts packet
Outer packet: Source = 10.200.1.2, Destination = the OpenVPN server's public IP
Step 8: Return to Host Namespace
Encrypted OpenVPN packet exits vpn_ns via veth-vpn
Arrives in host namespace at veth-wg
Routing decision: the destination is the OpenVPN server's public IP, so the main table's default route sends it out via ens5
Step 9: Internet Egress via VPN
Packet exits via ens5 to AWS network
AWS routes to the OpenVPN provider server
OpenVPN provider decrypts, sees original request to google.com
OpenVPN provider forwards to google.com from VPN exit node
Step 10: Return Path
Response from google.com → OpenVPN provider exit node
OpenVPN provider encrypts, sends to OpenVPN client
Packet arrives at ens5, forwarded to veth-wg
Enters vpn_ns, OpenVPN decrypts to tun0
NAT reverse translation: tun0 IP → 10.4.0.200
Exits namespace via veth-vpn to veth-wg
Policy routing returns to wg0
WireGuard encrypts and sends to client
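A simple way to confirm which path a client landed on is to check its apparent public IP (hypothetical check; any what-is-my-IP service works):

```bash
# From 10.4.0.50  → expect the AWS Elastic IP
# From 10.4.0.200 → expect an OpenVPN provider exit-node IP
curl -s https://ifconfig.me
```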
📈 VPN-Routed Flow Diagram
Scenario 3: Internal Traffic (Client-to-Client)
Complete Local Path
When Client A (10.4.0.50) communicates with Client B (10.4.0.200):
- Packet arrives on wg0 from Client A
- The priority-100 policy rule matches (source and destination both in 10.4.0.0/24) and selects the main table:
ip rule add from 10.4.0.0/24 to 10.4.0.0/24 table main priority 100
- Hairpin forwarding on wg0 is explicitly permitted:
iptables -A FORWARD -i wg0 -o wg0 -s 10.4.0.0/24 -d 10.4.0.0/24 -j ACCEPT
- WireGuard re-encrypts the packet and delivers it to Client B; the traffic never leaves the server
🌍 DNS Resolution Architecture
DNS resolution in this system is complex because it must handle three distinct scenarios: local domain resolution, VPN client DNS privacy, and the server's own DNS queries that need to traverse the VPN.
DNS Server Configuration (dnsmasq)
The dnsmasq server provides:
- Split-horizon DNS: Different responses for internal domains
- Multi-interface listening: Serves clients on wg0, namespace on veth-wg, and localhost
- Custom domain resolution: Local overrides for internal services
- Upstream forwarding: Queries for external domains forwarded to public DNS
DNS Resolution Paths
Path 1: Non-VPN Client DNS Query (Client 10.4.0.50)
Query: Client 10.4.0.50 queries "google.com"
Step 1: DNS query sent to 10.4.0.1:53 (dnsmasq)
Step 2: dnsmasq checks local records (not found)
Step 3: dnsmasq forwards to upstream: 1.1.1.2 or 8.8.8.8
Step 4: DNS query exits via ens5 directly (source: 10.0.1.191)
Step 5: Response returns, dnsmasq caches and replies to client
Privacy Level: Cloudflare/Google sees the query coming from the AWS instance's public Elastic IP (after VPC NAT)
Path 2: VPN Client DNS Query (Client 10.4.0.200)
Query: Client 10.4.0.200 queries "google.com"
Step 1: DNS query sent to 10.4.0.1:53 (dnsmasq)
Step 2: dnsmasq checks local records (not found)
Step 3: dnsmasq creates upstream query to 1.1.1.2:53
Step 4: Query packet marked by iptables:
iptables -t mangle -A OUTPUT -p udp --dport 53 -d 1.1.1.2 -j MARK --set-mark 0x100
Step 5: Marked packet routed via table 200 (through namespace)
Step 6: Packet enters vpn_ns via veth
Step 7: NAT applied inside namespace:
iptables -t nat -A POSTROUTING -s 10.0.1.191 -d 1.1.1.2 -p udp --dport 53 -j SNAT --to-source 10.200.1.1
Step 8: Packet exits tun0 through OpenVPN to OpenVPN provider
Step 9: OpenVPN provider forwards to 1.1.1.2, receives response
Step 10: Response returns through VPN tunnel, NAT reversed
Step 11: dnsmasq receives response, replies to client 10.4.0.200
Privacy Level: Cloudflare sees query from OpenVPN provider exit IP (true privacy)
Path 3: Local Domain Query (Any Client)
Query: Any client queries "vault.raff.local"
Step 1: DNS query sent to 10.4.0.1:53
Step 2: dnsmasq checks local records (FOUND)
Step 3: dnsmasq immediately responds with 10.4.0.7
No upstream query needed
Client routes to 10.4.0.7 as internal traffic (never leaves WireGuard)
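These three paths can be exercised from any client with dig (a sketch; the expected answers follow from the records above):

```bash
# Path 3: local records answered directly by dnsmasq
dig @10.4.0.1 vault.raff.local +short   # expect 10.4.0.7
dig @10.4.0.1 rafflab.internal +short   # expect 10.4.0.3

# Path 1/2: external name forwarded upstream (route depends on who asks)
dig @10.4.0.1 google.com +short
```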
🌐 DNS Resolution Flow Comparison
Why DNS Queries Are Routed Through VPN
⚠️ DNS Leak Prevention
The Problem: Even if your HTTP traffic goes through a VPN, DNS queries can "leak" outside the VPN if not carefully handled. This means your DNS provider (like Cloudflare or Google) can see what websites you're visiting, defeating much of the VPN's privacy benefit.
The Solution: This system marks DNS queries from dnsmasq destined for upstream servers with fwmark 0x100, forcing them to route through the VPN namespace and exit via OpenVPN provider. This ensures that VPN clients' DNS resolution is also anonymized.
Technical Detail: The DNS query marking is very specific - only packets going TO port 53 (DNS queries) are marked, not responses coming FROM port 53. This prevents routing loops.
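The document shows the marking rule only for 1.1.1.2; presumably the setup script repeats it for each upstream resolver, along these lines:

```bash
# Mark dnsmasq's outbound queries to every configured upstream (sketch)
for dns in 1.1.1.2 1.0.0.2 8.8.8.8 8.8.4.4; do
  iptables -t mangle -A OUTPUT -p udp --dport 53 -d "$dns" \
    -j MARK --set-mark 0x100
done
```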
📦 Network Namespace Architecture
Network namespaces are a Linux kernel feature that provides complete network stack isolation. This system uses a namespace called vpn_ns to create a completely separate networking environment for VPN-routed traffic.
Why Namespaces Are Essential
🔧 The Routing Conflict Problem
Without namespaces, you face an impossible routing conflict:
- The OpenVPN client needs a default route pointing to the VPN (tun0)
- But direct-internet traffic needs a default route pointing to ens5
- Linux can only have ONE default route per routing table
Solution: Run OpenVPN in a namespace with its own routing table. The namespace has a default route to tun0, while the host keeps its default route to ens5. Traffic is selectively sent into the namespace via policy routing.
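The result is two coexisting default routes, one per network stack (the VPC gateway address shown is hypothetical):

```bash
# Host namespace keeps its default via ens5
ip route show
#   default via 10.0.1.1 dev ens5 ...

# vpn_ns gets its own default via tun0 once OpenVPN is connected
ip netns exec vpn_ns ip route show
#   default via <tun0 peer> dev tun0 ...
```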
Namespace Components
| Component | Inside Namespace? | Purpose |
|---|---|---|
| OpenVPN Process | ✅ Yes | Runs entirely within vpn_ns, only sees namespace network |
| tun0 Interface | ✅ Yes | Created by OpenVPN inside namespace |
| veth-vpn (namespace end) | ✅ Yes | Connection point to host namespace |
| Namespace Routing Table | ✅ Yes | Default route: via tun0 |
| Namespace iptables | ✅ Yes | Separate NAT and forwarding rules |
| veth-wg (host end) | ❌ No | Host namespace bridge to vpn_ns |
| WireGuard (wg0) | ❌ No | Remains in host namespace |
| ens5 | ❌ No | Physical NIC stays in host |
Namespace Lifecycle
Creation (wg0-up.sh)
- Create namespace: ip netns add vpn_ns
- Create veth pair: ip link add veth-wg type veth peer name veth-vpn
- Move one end into namespace: ip link set veth-vpn netns vpn_ns
- Configure IPs on both ends
- Start OpenVPN inside namespace: ip netns exec vpn_ns openvpn ...
- Configure routes inside namespace
- Configure iptables inside namespace
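Put together, the first four steps might look like this sketch, using the addresses documented earlier (the loopback line is an assumption most namespace setups require):

```bash
ip netns add vpn_ns
ip link add veth-wg type veth peer name veth-vpn
ip link set veth-vpn netns vpn_ns

ip addr add 10.200.1.1/24 dev veth-wg
ip link set veth-wg up
ip netns exec vpn_ns ip addr add 10.200.1.2/24 dev veth-vpn
ip netns exec vpn_ns ip link set veth-vpn up
ip netns exec vpn_ns ip link set lo up
```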
Teardown (wg0-down.sh)
- Stop OpenVPN process (kill PID)
- Remove iptables rules inside namespace
- Delete veth pair (automatically removes both ends)
- Delete namespace: ip netns del vpn_ns
- Clean up DNS config: rm -rf /etc/netns/vpn_ns
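A corresponding teardown sketch (the PID file path is an assumption):

```bash
# Stop OpenVPN, then dismantle the namespace plumbing
kill "$(cat /run/openvpn-ns.pid)" 2>/dev/null || true
ip link del veth-wg 2>/dev/null || true   # removes veth-vpn too
ip netns del vpn_ns
rm -rf /etc/netns/vpn_ns
```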
Viewing Namespace State
🔍 Useful Commands for Namespace Inspection
```bash
# List all namespaces
ip netns list

# Execute a command in the namespace
ip netns exec vpn_ns <command>

# View namespace interfaces
ip netns exec vpn_ns ip addr show

# View namespace routing table
ip netns exec vpn_ns ip route show

# View namespace iptables
ip netns exec vpn_ns iptables -L -n -v
ip netns exec vpn_ns iptables -t nat -L -n -v

# Check OpenVPN connection
ip netns exec vpn_ns ip addr show tun0
tail -f /var/log/openvpn-ns.log
```
🛣️ Policy-Based Routing
Policy routing (also called policy-based routing or PBR) is the mechanism that makes split-tunneling possible. Instead of routing decisions being based solely on destination addresses, policy routing allows decisions based on source address, packet marks, and other criteria.
The Linux Routing Policy Database (RPDB)
Linux maintains a database of routing rules that are checked in priority order. The first matching rule determines which routing table to use.
```bash
# View current routing rules
ip rule list

# Output from this system:
0:      from all lookup local                      # Priority 0 (always first)
100:    from 10.4.0.0/24 to 10.4.0.0/24 lookup main
101:    from all to 10.4.0.0/24 lookup main
102:    from all to 10.0.0.0/16 lookup main
103:    from all to 10.0.1.0/24 lookup main
104:    from all to 10.0.2.0/24 lookup main
200:    from all fwmark 0x200 lookup 200           # VPN client traffic
201:    from all fwmark 0x100 lookup 200           # DNS queries for VPN clients
32766:  from all lookup main
32767:  from all lookup default
```
Rule Priority Explanation
| Priority | Rule | Purpose | Action |
|---|---|---|---|
| 0 | from all lookup local | System-critical local routes | Never modify this |
| 100 | from 10.4.0.0/24 to 10.4.0.0/24 lookup main | WireGuard peer-to-peer traffic | Keep all client-to-client traffic local, never route through VPN |
| 101-104 | from all to [private CIDRs] lookup main | Exclude AWS VPC and local networks from VPN | Traffic to these destinations always uses main table (direct) |
| 200 | from all fwmark 0x200 lookup 200 | VPN client data traffic | Route to table 200 (through namespace) |
| 201 | from all fwmark 0x100 lookup 200 | DNS queries for VPN clients | Route to table 200 (through namespace) |
| 32766 | from all lookup main | Default system routing | All unmarked traffic uses main table |
Custom Routing Table 200
Table 200 is created specifically for VPN-routed traffic:
```bash
# View table 200 routes
ip route show table 200

# Output:
default via 10.200.1.2 dev veth-wg
```
This simple default route sends all traffic in table 200 to the namespace (10.200.1.2 is the namespace end of the veth pair).
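The table and the rules that feed it can be recreated with commands like these (a sketch; exclusion rules 101 and 103-104 omitted for brevity):

```bash
# Route everything in table 200 into the namespace
ip route add default via 10.200.1.2 dev veth-wg table 200

# Exclusions first: lower priority numbers are evaluated earlier
ip rule add from 10.4.0.0/24 to 10.4.0.0/24 table main priority 100
ip rule add to 10.0.0.0/16 table main priority 102

# Then the marked traffic
ip rule add fwmark 0x200 table 200 priority 200
ip rule add fwmark 0x100 table 200 priority 201
```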
Why Priority Order Matters
⚠️ Order Is Critical
Rules are evaluated from lowest to highest priority number. This means:
- Priority 100-104 (exclusions) MUST come before priority 200-201 (VPN routing)
- If a packet matches priority 100, it never reaches priority 200
- This ensures local traffic is excluded even for VPN clients
Example: A VPN client (10.4.0.200) pinging another WireGuard peer (10.4.0.50)
- Packet would normally be marked with fwmark 0x200 (VPN client)
- But priority 100 rule matches first: source 10.4.0.200 (in 10.4.0.0/24) → destination 10.4.0.50 (in 10.4.0.0/24)
- Uses main table, routes directly via wg0
- Priority 200 rule never evaluated
🔄 NAT & Masquerading
Network Address Translation (NAT) is used extensively to allow multiple private IP addresses to share public IPs. This system employs NAT at multiple layers.
Layer 1: Host Namespace NAT (Direct Traffic)
For clients in the direct internet range (10.4.0.0-127):
iptables -t nat -A POSTROUTING -s 10.4.0.0/25 -o ens5 -j MASQUERADE
Effect: Translates source IPs from 10.4.0.0-127 to 10.0.1.191 (AWS instance private IP)
Then: AWS VPC performs additional NAT from 10.0.1.191 to the Elastic IP
Layer 2: VPN Namespace NAT (VPN Traffic)
Inside the vpn_ns namespace:
ip netns exec vpn_ns iptables -t nat -A POSTROUTING \
-m iprange --src-range 10.4.0.128-10.4.0.255 -o tun0 -j MASQUERADE
Effect: Translates source IPs from VPN clients to the OpenVPN-assigned IP on tun0
Then: OpenVPN provider performs NAT from tun0 IP to the VPN exit node's public IP
Layer 3: DNS Query NAT
Special NAT for dnsmasq queries going through the VPN:
iptables -t nat -A POSTROUTING \
-s 10.0.1.191 -d 1.1.1.2 -p udp --dport 53 \
-j SNAT --to-source 10.200.1.1
Why Needed: dnsmasq (running on 10.0.1.191) sends DNS queries. When these queries need to go through the namespace, the source must be rewritten to 10.200.1.1 so responses can route back through the veth pair correctly.
Connection Tracking
All NAT operations rely on conntrack (connection tracking):
- Kernel maintains a table of all active connections
- For each outbound NATed packet, records the translation
- For inbound response packets, reverses the translation automatically
- Tracks connection state: NEW, ESTABLISHED, RELATED
🔍 Viewing Connection Tracking
```bash
# View all tracked connections
conntrack -L

# View connections through VPN namespace
ip netns exec vpn_ns conntrack -L

# Monitor new connections in real-time
conntrack -E
```
⏱️ System Lifecycle
Startup Sequence (wg0-up.sh)
Phase 1: Core Networking Setup
- Enable IP forwarding: sysctl -w net.ipv4.ip_forward=1
- Create network namespace: ip netns add vpn_ns
- Create veth pair: ip link add veth-wg type veth peer name veth-vpn
- Move veth-vpn into namespace: ip link set veth-vpn netns vpn_ns
- Configure IP addresses on both veth ends
- Bring interfaces up
- Set WireGuard MTU to 1420
Phase 2: Namespace Configuration
- Enable forwarding inside namespace
- Set temporary default route inside namespace (via veth)
- Add static route for the OpenVPN server's public IP via the veth gateway
- Add route for WireGuard subnet back through veth
- Configure DNS inside namespace (/etc/netns/vpn_ns/resolv.conf)
Phase 3: OpenVPN Connection
- Launch OpenVPN client inside namespace
- OpenVPN connects to OpenVPN provider server
- tun0 interface created inside namespace
- OpenVPN sets default route to tun0 (overriding temporary route)
- Connection established and verified
Phase 4: NAT Configuration
- Set up NAT inside namespace (VPN traffic → tun0)
- Set up NAT in host namespace (direct traffic → ens5)
- Set up DNS query NAT (dnsmasq → namespace)
- Set up MSS clamping in both namespaces (sketch below)
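The clamping rules themselves are not shown in this document; a common form looks like this sketch, applied in both namespaces:

```bash
# Clamp TCP MSS to the tunnel path MTU on forwarded SYNs
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --clamp-mss-to-pmtu
ip netns exec vpn_ns iptables -t mangle -A FORWARD -p tcp \
  --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
```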
Phase 5: Policy Routing
- Create routing table 200 with default route via namespace
- Add priority 100 rule (WireGuard peer-to-peer)
- Add priority 101-104 rules (exclude local networks)
- Add priority 200 rule (VPN client traffic)
- Add priority 201 rule (DNS queries)
Phase 6: Packet Marking & Forwarding
- Set up mangle rules to mark VPN client packets (fwmark 0x200)
- Set up mangle rules to mark DNS queries (fwmark 0x100)
- Configure FORWARD rules for all traffic paths
- Allow peer-to-peer communication on wg0
- System ready for traffic
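Once the startup script completes, a few sanity checks (hypothetical, but using only standard tooling) confirm each layer:

```bash
wg show wg0                                      # WireGuard up, peers listed
ip rule list                                     # policy rules 100-201 present
ip netns exec vpn_ns ip addr show tun0           # OpenVPN tunnel established
ip netns exec vpn_ns ping -c 1 -I tun0 1.1.1.1   # traffic flows via the VPN
```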
Shutdown Sequence (wg0-down.sh)
Teardown happens in reverse order to prevent routing issues:
- Stop OpenVPN: Kill process, wait for cleanup
- Remove namespace iptables rules: NAT and mangle rules
- Remove host iptables rules: Policy routing marks, NAT, forwarding
- Remove policy routing rules: Clean RPDB
- Flush custom routing table 200
- Delete veth pair: Automatically removes both ends
- Delete namespace: All namespace-specific configs gone
- Clean up DNS config: Remove /etc/netns/vpn_ns
💼 Use Cases & Scenarios
Use Case 1: Privacy-Focused Browsing
Scenario: User wants to browse sensitive content without revealing their real IP
Solution: Assign WireGuard IP from VPN range (10.4.0.128-255)
Result: All browsing traffic exits through OpenVPN provider, DNS queries also anonymized
Use Case 2: Geo-Restricted Content
Scenario: Streaming service only available in certain countries
Solution: Use VPN range IP, select OpenVPN provider server in target country
Result: Service sees connection from VPN country
Use Case 3: Low-Latency Gaming
Scenario: Online gaming requires low latency, VPN adds too much overhead
Solution: Assign IP from direct internet range (10.4.0.0-127)
Result: Game traffic bypasses VPN, lowest possible latency
Use Case 4: Internal Service Access
Scenario: Access company internal services (vault.raff.local)
Solution: Any WireGuard IP works, DNS resolves to 10.4.0.7
Result: Traffic stays within WireGuard network, never reaches internet
Use Case 5: Split Configuration on Single Device
Scenario: User wants some apps through VPN, others direct
Solution: Device connects to WireGuard multiple times with different IPs
Advanced: Use network namespaces on client device, route apps differently
🔧 Configuration Variations
Variation 1: Different Split Ratios
The current 50/50 split can be adjusted:
- 75% VPN, 25% Direct: Change VPN_RANGE to 10.4.0.64-10.4.0.255
- 25% VPN, 75% Direct: Change VPN_RANGE to 10.4.0.192-10.4.0.255
- Update NON_VPN_SUBNET accordingly (see the sketch below)
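For example, the 75/25 split above might be wired up like this (variable names follow the conventions quoted in this section; the rules mirror the ones shown earlier):

```bash
VPN_RANGE="10.4.0.64-10.4.0.255"   # 192 addresses → through the VPN
NON_VPN_SUBNET="10.4.0.0/26"       # 64 addresses → direct

iptables -t mangle -A PREROUTING -i wg0 \
  -m iprange --src-range "$VPN_RANGE" -j MARK --set-mark 0x200
iptables -t nat -A POSTROUTING -s "$NON_VPN_SUBNET" -o ens5 -j MASQUERADE
```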
Variation 2: Multiple VPN Providers
Could create multiple namespaces for different VPN providers:
- vpn_ns_nord: 10.4.0.128-191 → OpenVPN provider
- vpn_ns_express: 10.4.0.192-255 → ExpressVPN
- Create separate veth pairs and routing tables for each
Variation 3: Protocol-Based Routing
Instead of IP ranges, route based on protocol:
- HTTP/HTTPS through VPN: Mark TCP port 80/443
- Everything else direct
- Requires more complex iptables rules (see the sketch below)
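A sketch of the core marking rule for this variation:

```bash
# Send only web traffic from WireGuard clients through the VPN
iptables -t mangle -A PREROUTING -i wg0 -p tcp \
  -m multiport --dports 80,443 -j MARK --set-mark 0x200
```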
Variation 4: Time-Based Routing
Route through VPN only during certain hours:
- Use iptables time matching: -m time --timestart 09:00 --timestop 17:00
- Conditionally apply packet marking, as sketched below
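A sketch combining the time match with the existing marking rule:

```bash
# Mark VPN-range traffic only during business hours
# (the kernel time match uses UTC unless --kerneltz is given)
iptables -t mangle -A PREROUTING -i wg0 \
  -m iprange --src-range 10.4.0.128-10.4.0.255 \
  -m time --timestart 09:00 --timestop 17:00 \
  -j MARK --set-mark 0x200
```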
🔒 Security Model
Encryption Layers
| Layer | Protocol | Protects Against |
|---|---|---|
| Client ↔ WireGuard Server | WireGuard (Noise Protocol) | ISP snooping, local network attacks |
| WireGuard Server ↔ OpenVPN provider | OpenVPN (TLS) | AWS network monitoring, datacenter attacks |
| OpenVPN provider ↔ Destination | TLS/HTTPS (application layer) | VPN provider snooping, exit node attacks |
Attack Surface Analysis
✅ Protected Scenarios
- ISP Surveillance: WireGuard encryption prevents ISP from seeing any client traffic
- Local Network Attacks: All traffic encrypted before leaving client device
- DNS Leaks: DNS queries for VPN clients route through VPN tunnel
- IP Address Exposure: VPN clients appear to come from OpenVPN provider exit nodes
⚠️ Potential Vulnerabilities
- AWS Can See Metadata: AWS can see connection to OpenVPN provider but not content (encrypted)
- OpenVPN provider Can See Traffic: VPN provider can see decrypted traffic (choose trusted provider)
- Direct-Internet Clients: These clients' traffic visible to AWS and ISP in metadata
- Correlation Attacks: Sophisticated attacker monitoring both ends could correlate timing
Firewall Rules
🛡️ Default Deny Posture
This system uses an "allow what's explicitly permitted" approach:
- No INPUT rules are shown; by default only established connections are allowed
- FORWARD chain has explicit rules for each permitted path
- Anything not explicitly allowed is dropped
For production, add INPUT rules restricting access to SSH and the WireGuard port
Principle of Least Privilege
- Namespace Isolation: OpenVPN runs in isolated namespace, can't access host network directly
- Separate Routing Tables: Each traffic path has its own rules, no cross-contamination
- Specific NAT Rules: Only necessary address translations permitted
- Granular Marking: Packets marked only for specific IP ranges
📚 Summary
This WireGuard split-tunnel VPN system represents a sophisticated approach to selective routing, combining modern VPN protocols with Linux networking primitives. By understanding the architecture, data flow, and component interactions documented here, you have the conceptual foundation to deploy, modify, and troubleshoot this system in any environment.
The key insight is that IP address allocation is destiny: simply by assigning a client an IP in one half of the subnet versus the other, you completely change how their traffic traverses the internet, all transparently and without client-side configuration.