Network Routing Optimization

Minimizing Latency and Packet Loss with Python

Network routing optimization is a fundamental challenge in computer networking. The goal is to find paths through a network that minimize latency and packet loss — two metrics that directly impact user experience. In this post, we’ll model a real-world-like network, apply classic and modern optimization algorithms, and visualize the results in both 2D and 3D.


The Problem

Consider a network of 10 nodes (routers/switches) with weighted edges representing two cost dimensions:

  • Latency (ms): propagation delay between nodes
  • Packet Loss (%): probability of a packet being dropped on a link

We want to find the optimal path from a source to a destination that minimizes a composite cost:

$$C(e) = \alpha \cdot L(e) + \beta \cdot P(e)$$

Where:

  • $L(e)$ = latency of edge $e$ in ms
  • $P(e)$ = packet loss rate of edge $e$ (normalized 0–1)
  • $\alpha, \beta$ = weighting factors (e.g., $\alpha = 0.7$, $\beta = 0.3$)

For end-to-end packet delivery probability across a path $\pi = (e_1, e_2, \ldots, e_k)$:

$$P_{\text{delivery}}(\pi) = \prod_{i=1}^{k}(1 - p_i)$$

To use additive shortest-path algorithms on this multiplicative metric, we take the negative logarithm of per-link delivery probability, turning the product into a sum:

$$-\log P_{\text{delivery}}(\pi) = \sum_{i=1}^{k} -\log(1 - p_i)$$

We then find the path minimizing:

$$\text{Cost}(\pi) = \alpha \sum_{i} L(e_i) + \beta \sum_{i} \left(-\log(1 - p_i)\right) \cdot S$$

Where $S$ is a scaling factor to keep units comparable.
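Plugging in the example weights ($\alpha = 0.7$, $\beta = 0.3$, $S = 100$), the per-edge cost is a one-liner. A quick sketch, using an arbitrary 10 ms / 1%-loss link as input:

```python
import math

ALPHA, BETA, SCALE = 0.7, 0.3, 100.0  # weights and scale from the formulas above

def edge_cost(latency_ms, loss_prob):
    """Composite edge cost: alpha * L + beta * (-log(1 - p)) * S."""
    return ALPHA * latency_ms + BETA * (-math.log(1 - loss_prob)) * SCALE

print(round(edge_cost(10, 0.01), 4))  # latency contributes 7.0; loss adds ~0.3
```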


Algorithms Used

  1. Dijkstra’s Algorithm — finds shortest path using composite cost
  2. Bellman-Ford — handles negative weights (used here for comparison)
  3. Genetic Algorithm (GA) — metaheuristic for global optimization across all paths
  4. Simulated Annealing (SA) — probabilistic optimization to escape local optima

Full Source Code

import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import random
import math
import warnings
warnings.filterwarnings('ignore')

# ── Reproducibility ──────────────────────────────────────────────────────────
np.random.seed(42)
random.seed(42)

# ══════════════════════════════════════════════════════════════════════════════
# 1. Network Construction
# ══════════════════════════════════════════════════════════════════════════════
def build_network():
    """Build a weighted directed graph representing a 10-node network."""
    G = nx.DiGraph()
    nodes = list(range(10))
    G.add_nodes_from(nodes)

    # Fixed edges: (src, dst, latency_ms, packet_loss_prob)
    edges = [
        (0, 1, 10, 0.01), (0, 2, 25, 0.02), (0, 3, 15, 0.005),
        (1, 4, 20, 0.03), (1, 5, 30, 0.01),
        (2, 4, 12, 0.02), (2, 6, 18, 0.04),
        (3, 5, 22, 0.015), (3, 7, 35, 0.025),
        (4, 8, 14, 0.01), (4, 6, 10, 0.005),
        (5, 8, 16, 0.02), (5, 9, 28, 0.03),
        (6, 9, 20, 0.01),
        (7, 8, 12, 0.005), (7, 9, 18, 0.02),
        (8, 9, 10, 0.01),
        # Reverse / alternate paths
        (1, 0, 10, 0.01), (2, 0, 25, 0.02), (3, 0, 15, 0.005),
        (4, 1, 20, 0.03), (5, 1, 30, 0.01),
        (4, 2, 12, 0.02), (6, 2, 18, 0.04),
        (5, 3, 22, 0.015), (7, 3, 35, 0.025),
        (8, 4, 14, 0.01), (6, 4, 10, 0.005),
        (8, 5, 16, 0.02), (9, 5, 28, 0.03),
        (9, 6, 20, 0.01),
        (8, 7, 12, 0.005), (9, 7, 18, 0.02),
        (9, 8, 10, 0.01),
    ]

    alpha, beta, scale = 0.7, 0.3, 100.0

    for src, dst, lat, loss in edges:
        log_loss = -math.log(1 - loss) * scale
        composite = alpha * lat + beta * log_loss
        G.add_edge(src, dst,
                   latency=lat,
                   packet_loss=loss,
                   log_loss=log_loss,
                   composite=composite)
    return G

G = build_network()
SOURCE, TARGET = 0, 9

# ══════════════════════════════════════════════════════════════════════════════
# 2. Dijkstra (via NetworkX)
# ══════════════════════════════════════════════════════════════════════════════
def dijkstra_path(G, src, dst):
    path = nx.dijkstra_path(G, src, dst, weight='composite')
    cost = nx.dijkstra_path_length(G, src, dst, weight='composite')
    return path, cost

dijkstra_result, dijkstra_cost = dijkstra_path(G, SOURCE, TARGET)

# ══════════════════════════════════════════════════════════════════════════════
# 3. Bellman-Ford
# ══════════════════════════════════════════════════════════════════════════════
def bellman_ford_path(G, src, dst):
    length, path_dict = nx.single_source_bellman_ford(G, src, weight='composite')
    return path_dict[dst], length[dst]

bf_result, bf_cost = bellman_ford_path(G, SOURCE, TARGET)

# ══════════════════════════════════════════════════════════════════════════════
# 4. Helper: path metrics
# ══════════════════════════════════════════════════════════════════════════════
def path_metrics(G, path):
    latency = sum(G[u][v]['latency'] for u, v in zip(path, path[1:]))
    loss_sum = sum(G[u][v]['packet_loss'] for u, v in zip(path, path[1:]))
    delivery = math.prod(1 - G[u][v]['packet_loss'] for u, v in zip(path, path[1:]))
    composite = sum(G[u][v]['composite'] for u, v in zip(path, path[1:]))
    return latency, loss_sum, delivery, composite

# ══════════════════════════════════════════════════════════════════════════════
# 5. Genetic Algorithm
# ══════════════════════════════════════════════════════════════════════════════
def find_all_simple_paths(G, src, dst, cutoff=8):
    return list(nx.all_simple_paths(G, src, dst, cutoff=cutoff))

all_paths = find_all_simple_paths(G, SOURCE, TARGET)

def ga_fitness(path):
    try:
        return path_metrics(G, path)[3]  # composite cost
    except Exception:
        return float('inf')

def genetic_algorithm(paths, pop_size=30, generations=80, mutation_rate=0.2):
    if len(paths) == 0:
        return None, float('inf'), []
    pop_size = min(pop_size, len(paths))
    population = random.sample(paths, pop_size)
    best_path, best_cost = min(((p, ga_fitness(p)) for p in population), key=lambda x: x[1])

    history = [best_cost]
    for _ in range(generations):
        scored = sorted(population, key=ga_fitness)
        elite = scored[:max(2, pop_size // 4)]
        new_pop = list(elite)

        while len(new_pop) < pop_size:
            parent = random.choice(elite)
            if random.random() < mutation_rate and len(paths) > 1:
                child = random.choice(paths)
            else:
                child = parent
            new_pop.append(child)

        population = new_pop
        gen_best_path, gen_best_cost = min(((p, ga_fitness(p)) for p in population), key=lambda x: x[1])
        if gen_best_cost < best_cost:
            best_cost, best_path = gen_best_cost, gen_best_path
        history.append(best_cost)

    return best_path, best_cost, history

ga_result, ga_cost, ga_history = genetic_algorithm(all_paths)

# ══════════════════════════════════════════════════════════════════════════════
# 6. Simulated Annealing
# ══════════════════════════════════════════════════════════════════════════════
def simulated_annealing(paths, T_init=200.0, T_min=0.1, cooling=0.95, iterations=500):
    if not paths:
        return None, float('inf'), []
    current = random.choice(paths)
    current_cost = ga_fitness(current)
    best, best_cost = current, current_cost
    T = T_init
    history = [current_cost]

    for _ in range(iterations):
        candidate = random.choice(paths)
        cand_cost = ga_fitness(candidate)
        delta = cand_cost - current_cost
        if delta < 0 or random.random() < math.exp(-delta / T):
            current, current_cost = candidate, cand_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        T = max(T * cooling, T_min)
        history.append(best_cost)

    return best, best_cost, history

sa_result, sa_cost, sa_history = simulated_annealing(all_paths)

# ══════════════════════════════════════════════════════════════════════════════
# 7. Print Summary
# ══════════════════════════════════════════════════════════════════════════════
algorithms = {
    'Dijkstra':            (dijkstra_result, dijkstra_cost),
    'Bellman-Ford':        (bf_result, bf_cost),
    'Genetic Algorithm':   (ga_result, ga_cost),
    'Simulated Annealing': (sa_result, sa_cost),
}

print("=" * 65)
print(f"{'Algorithm':<22} {'Path':<28} {'Composite':>10}")
print("=" * 65)
for name, (path, cost) in algorithms.items():
    path_str = " → ".join(str(n) for n in path) if path else "N/A"
    print(f"{name:<22} {path_str:<28} {cost:>10.4f}")
print("=" * 65)

print("\n── Detailed Metrics ──")
print(f"{'Algorithm':<22} {'Latency(ms)':>12} {'Loss(sum)':>10} {'Delivery%':>10}")
print("-" * 58)
for name, (path, _) in algorithms.items():
    if path:
        lat, ls, deliv, _ = path_metrics(G, path)
        print(f"{name:<22} {lat:>12.1f} {ls:>10.4f} {deliv*100:>9.2f}%")

# ══════════════════════════════════════════════════════════════════════════════
# 8. Visualization
# ══════════════════════════════════════════════════════════════════════════════
pos = nx.spring_layout(G, seed=7, k=1.8)

COLORS = {
    'Dijkstra':            '#E74C3C',
    'Bellman-Ford':        '#3498DB',
    'Genetic Algorithm':   '#2ECC71',
    'Simulated Annealing': '#F39C12',
}
PATH_STYLES = {
    'Dijkstra':            {'width': 4,   'style': 'solid'},
    'Bellman-Ford':        {'width': 3,   'style': 'dashed'},
    'Genetic Algorithm':   {'width': 2.5, 'style': 'dotted'},
    'Simulated Annealing': {'width': 2,   'style': 'dashdot'},
}

fig = plt.figure(figsize=(22, 18))
fig.patch.set_facecolor('#0D1117')

# ── Panel 1: Network topology with all algorithm paths ───────────────────────
ax1 = fig.add_subplot(2, 3, 1)
ax1.set_facecolor('#161B22')
ax1.set_title('Network Topology & Optimal Paths', color='white', fontsize=13, pad=10)

edge_labels = {(u, v): f"{d['latency']}ms\n{d['packet_loss']*100:.1f}%"
               for u, v, d in G.edges(data=True) if u < v}

nx.draw_networkx_nodes(G, pos, ax=ax1, node_color='#58A6FF',
                       node_size=700, alpha=0.95)
nx.draw_networkx_labels(G, pos, ax=ax1, font_color='white', font_size=10, font_weight='bold')
nx.draw_networkx_edges(G, pos, ax=ax1, edge_color='#30363D',
                       arrows=False, width=1.0, alpha=0.5)
nx.draw_networkx_edge_labels(G, pos, edge_labels, ax=ax1,
                             font_color='#8B949E', font_size=6)

for name, (path, _) in algorithms.items():
    if path and len(path) > 1:
        path_edges = list(zip(path, path[1:]))
        style = PATH_STYLES[name]
        nx.draw_networkx_edges(G, pos, edgelist=path_edges, ax=ax1,
                               edge_color=COLORS[name],
                               width=style['width'], alpha=0.9,
                               arrows=True, arrowsize=15,
                               connectionstyle='arc3,rad=0.1',
                               style=style['style'])

patches = [mpatches.Patch(color=c, label=n) for n, c in COLORS.items()]
ax1.legend(handles=patches, loc='upper left', fontsize=7,
           facecolor='#161B22', edgecolor='#30363D', labelcolor='white')
ax1.axis('off')

# ── Panel 2: Composite cost bar chart ────────────────────────────────────────
ax2 = fig.add_subplot(2, 3, 2)
ax2.set_facecolor('#161B22')
ax2.set_title('Composite Cost Comparison', color='white', fontsize=13, pad=10)

names = list(algorithms.keys())
costs = [algorithms[n][1] for n in names]
colors = [COLORS[n] for n in names]
bars = ax2.bar(names, costs, color=colors, alpha=0.85, edgecolor='white', linewidth=0.5)
for bar, cost in zip(bars, costs):
    ax2.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.3,
             f'{cost:.3f}', ha='center', va='bottom', color='white', fontsize=9)

ax2.set_ylabel('Composite Cost', color='#8B949E')
ax2.tick_params(colors='#8B949E')
ax2.set_xticklabels(names, rotation=20, ha='right', color='#8B949E', fontsize=8)
for spine in ax2.spines.values():
    spine.set_edgecolor('#30363D')

# ── Panel 3: Latency vs Packet Loss scatter (all paths) ─────────────────────
ax3 = fig.add_subplot(2, 3, 3)
ax3.set_facecolor('#161B22')
ax3.set_title('All Paths: Latency vs Delivery Rate', color='white', fontsize=13, pad=10)

path_lats, path_delivs, path_costs = [], [], []
for p in all_paths:
    lat, _, deliv, comp = path_metrics(G, p)
    path_lats.append(lat)
    path_delivs.append(deliv * 100)
    path_costs.append(comp)

sc = ax3.scatter(path_lats, path_delivs, c=path_costs, cmap='plasma',
                 alpha=0.6, s=40, edgecolors='none')
cbar = plt.colorbar(sc, ax=ax3)
cbar.set_label('Composite Cost', color='#8B949E', fontsize=8)
cbar.ax.yaxis.set_tick_params(color='#8B949E')
plt.setp(plt.getp(cbar.ax.axes, 'yticklabels'), color='#8B949E')

for name, (path, _) in algorithms.items():
    if path:
        lat, _, deliv, _ = path_metrics(G, path)
        ax3.scatter(lat, deliv*100, color=COLORS[name], s=200,
                    marker='*', zorder=5, label=name, edgecolors='white', linewidths=0.5)

ax3.set_xlabel('Total Latency (ms)', color='#8B949E')
ax3.set_ylabel('Packet Delivery Rate (%)', color='#8B949E')
ax3.tick_params(colors='#8B949E')
ax3.legend(fontsize=7, facecolor='#161B22', edgecolor='#30363D', labelcolor='white')
for spine in ax3.spines.values():
    spine.set_edgecolor('#30363D')

# ── Panel 4: GA & SA convergence history ─────────────────────────────────────
ax4 = fig.add_subplot(2, 3, 4)
ax4.set_facecolor('#161B22')
ax4.set_title('Metaheuristic Convergence', color='white', fontsize=13, pad=10)

ax4.plot(ga_history, color=COLORS['Genetic Algorithm'], lw=2, label='Genetic Algorithm')
ax4.plot(sa_history, color=COLORS['Simulated Annealing'], lw=2, label='Simulated Annealing', alpha=0.85)
ax4.axhline(dijkstra_cost, color=COLORS['Dijkstra'], lw=1.5, linestyle='--', label='Dijkstra (optimal)')

ax4.set_xlabel('Iteration / Generation', color='#8B949E')
ax4.set_ylabel('Best Cost Found', color='#8B949E')
ax4.tick_params(colors='#8B949E')
ax4.legend(fontsize=8, facecolor='#161B22', edgecolor='#30363D', labelcolor='white')
for spine in ax4.spines.values():
    spine.set_edgecolor('#30363D')

# ── Panel 5: Edge weight heatmap ─────────────────────────────────────────────
ax5 = fig.add_subplot(2, 3, 5)
ax5.set_facecolor('#161B22')
ax5.set_title('Edge Composite Cost Heatmap', color='white', fontsize=13, pad=10)

n = G.number_of_nodes()
mat = np.full((n, n), np.nan)
for u, v, d in G.edges(data=True):
    mat[u, v] = d['composite']

im = ax5.imshow(mat, cmap='YlOrRd', aspect='auto', interpolation='nearest')
cbar2 = plt.colorbar(im, ax=ax5)
cbar2.set_label('Composite Cost', color='#8B949E', fontsize=8)
cbar2.ax.yaxis.set_tick_params(color='#8B949E')
plt.setp(plt.getp(cbar2.ax.axes, 'yticklabels'), color='#8B949E')

ax5.set_xticks(range(n)); ax5.set_yticks(range(n))
ax5.set_xticklabels(range(n), color='#8B949E')
ax5.set_yticklabels(range(n), color='#8B949E')
ax5.set_xlabel('Destination Node', color='#8B949E')
ax5.set_ylabel('Source Node', color='#8B949E')
for spine in ax5.spines.values():
    spine.set_edgecolor('#30363D')

# ── Panel 6: 3D Surface — Latency × Loss → Composite Cost ───────────────────
ax6 = fig.add_subplot(2, 3, 6, projection='3d')
ax6.set_facecolor('#0D1117')
ax6.set_title('3D: Latency × Loss → Cost Surface', color='white', fontsize=13, pad=10)

lat_range = np.linspace(5, 50, 60)
loss_range = np.linspace(0.001, 0.05, 60)
LL, PL = np.meshgrid(lat_range, loss_range)
alpha_w, beta_w, scale = 0.7, 0.3, 100.0
ZZ = alpha_w * LL + beta_w * (-np.log(1 - PL)) * scale

surf = ax6.plot_surface(LL, PL * 100, ZZ, cmap='viridis', alpha=0.85,
                        rstride=2, cstride=2, linewidth=0, antialiased=True)

# Plot actual edges as scatter on the surface
for u, v, d in G.edges(data=True):
    ax6.scatter(d['latency'], d['packet_loss']*100, d['composite'],
                color='#FF6B6B', s=30, zorder=5, alpha=0.9)

ax6.set_xlabel('Latency (ms)', color='#8B949E', fontsize=8, labelpad=6)
ax6.set_ylabel('Packet Loss (%)', color='#8B949E', fontsize=8, labelpad=6)
ax6.set_zlabel('Composite Cost', color='#8B949E', fontsize=8, labelpad=6)
ax6.tick_params(colors='#8B949E', labelsize=7)
ax6.xaxis.pane.fill = False; ax6.yaxis.pane.fill = False; ax6.zaxis.pane.fill = False
ax6.xaxis.pane.set_edgecolor('#30363D')
ax6.yaxis.pane.set_edgecolor('#30363D')
ax6.zaxis.pane.set_edgecolor('#30363D')
ax6.grid(color='#30363D', linestyle='--', linewidth=0.4, alpha=0.5)

fig.colorbar(surf, ax=ax6, shrink=0.5, pad=0.1).ax.yaxis.set_tick_params(color='#8B949E')

plt.suptitle('Network Routing Optimization — Minimizing Latency & Packet Loss',
             color='white', fontsize=16, fontweight='bold', y=1.01)
plt.tight_layout()
plt.savefig('network_routing_optimization.png', dpi=150,
            bbox_inches='tight', facecolor='#0D1117')
plt.show()
print("Figure saved.")

Code Walkthrough

1. Network Construction (build_network)

The network is modeled as a directed graph (nx.DiGraph) with 10 nodes. Each edge carries three attributes: raw latency (ms), raw packet loss (probability), and a composite cost computed as:

$$C(e) = \alpha \cdot L(e) + \beta \cdot \left(-\log(1 - p_e)\right) \cdot S$$

The log transform converts the multiplicative packet-loss metric into an additive one that shortest-path algorithms can handle. With $\alpha=0.7$, $\beta=0.3$, and $S=100$, we weight latency slightly more than loss while keeping both in a comparable numeric range.
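A two-line check (with made-up per-hop loss rates, not edges from the real graph) confirms that summing $-\log(1-p_i)$ and exponentiating back recovers exactly the multiplicative delivery probability:

```python
import math

losses = [0.01, 0.03, 0.01]  # illustrative per-hop loss rates
direct = math.prod(1 - p for p in losses)                    # product of per-hop delivery probs
via_log = math.exp(-sum(-math.log(1 - p) for p in losses))   # additive in log-space, then exp
print(direct, via_log)  # equal up to floating-point rounding
```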


2. Dijkstra’s Algorithm

dijkstra_result, dijkstra_cost = dijkstra_path(G, SOURCE, TARGET)

NetworkX’s dijkstra_path runs in $O((V + E) \log V)$ using a binary heap. It is guaranteed to find the globally optimal path given non-negative edge weights — which our composite cost always satisfies.
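For readers curious what nx.dijkstra_path does internally, here is a minimal heap-based sketch over a plain adjacency dict — a toy 3-node graph, not the post's network, and it assumes the destination is reachable:

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest path on an adjacency dict {u: {v: weight}}; weights must be non-negative."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry, already settled cheaper
        for v, w in adj.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:                # walk predecessors back to the source
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

# Toy example: the two-hop route 0→1→2 beats the direct 0→2 link
adj = {0: {1: 1.0, 2: 4.0}, 1: {2: 1.0}, 2: {}}
print(dijkstra(adj, 0, 2))  # ([0, 1, 2], 2.0)
```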


3. Bellman-Ford

bf_result, bf_cost = bellman_ford_path(G, SOURCE, TARGET)

Bellman-Ford runs in $O(VE)$ and can handle negative edge weights, making it useful when cost models involve penalties that can go negative. Here it serves as a cross-validation reference. Both algorithms should agree on the optimal path.
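The heart of Bellman-Ford is repeated relaxation over the edge list — at most $|V|-1$ rounds, plus one extra pass to detect negative cycles. A minimal sketch on a hypothetical 3-node edge list (note the negative edge, which Dijkstra could not handle):

```python
def bellman_ford(n, edges, src):
    """edges: list of (u, v, w). Returns the distance list; raises on a negative cycle."""
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0.0
    for _ in range(n - 1):                # at most n-1 relaxation rounds
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break                         # early exit once distances converge
    for u, v, w in edges:                 # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

edges = [(0, 1, 4.0), (0, 2, 5.0), (1, 2, -2.0)]
print(bellman_ford(3, edges, 0))  # [0.0, 4.0, 2.0]
```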


4. Genetic Algorithm

The GA works over the set of all simple paths found by nx.all_simple_paths. This is the search space. Each path is a “chromosome.” The algorithm:

  • Selects the top 25% elite paths as parents each generation
  • Applies random mutation (replacing a child with a random path at probability mutation_rate)
  • Runs for 80 generations with population size 30

The fitness function is directly the composite cost. The GA converges toward the optimum but is not guaranteed to reach it — that’s the inherent trade-off for flexibility in more complex cost landscapes.


5. Simulated Annealing

SA accepts worse solutions probabilistically with probability:

$$P(\text{accept}) = e^{-\Delta C / T}$$

where $\Delta C$ is the cost increase and $T$ is the current temperature. Temperature decreases each iteration by a cooling factor of $0.95$. This allows the algorithm to escape local optima early in the search and converge later.
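The acceptance rule is easy to tabulate. A small sketch (with a hypothetical cost increase of 5 units) shows how the same uphill move goes from near-certain acceptance at the initial temperature to essentially impossible once cooled:

```python
import math

def accept_prob(delta, T):
    """Metropolis acceptance probability: always accept improvements, else exp(-delta/T)."""
    return 1.0 if delta <= 0 else math.exp(-delta / T)

for T in (200.0, 20.0, 2.0, 0.2):
    print(f"T={T:>6}  P(accept delta=5) = {accept_prob(5.0, T):.4g}")
```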


Graph Explanations

Panel 1 — Network Topology & Paths

Each algorithm’s discovered path is drawn with a distinct color and line style over the network graph. Edge labels show latency and packet loss per link. This gives immediate visual intuition for which routes each algorithm prefers.

Panel 2 — Composite Cost Bar Chart

A direct comparison of the final composite cost achieved by each algorithm. Dijkstra and Bellman-Ford should match exactly. GA and SA may or may not converge to the same optimum depending on the random search.

Panel 3 — Latency vs. Delivery Rate Scatter

All enumerated simple paths from node 0 to node 9 are plotted. Color encodes composite cost. Star markers highlight the four algorithms’ chosen paths. This reveals the Pareto frontier trade-off: lower latency paths often suffer higher loss, and vice versa.

Panel 4 — Convergence History

The GA and SA best-found cost per iteration is plotted alongside the Dijkstra optimal (dashed red). A well-behaved metaheuristic should converge toward the Dijkstra line. Seeing how many iterations it takes — and whether it reaches optimality — reveals the algorithm’s efficiency.

Panel 5 — Edge Composite Cost Heatmap

A node×node matrix showing the direct composite cost of each edge. NaN cells (white/empty) indicate no direct link. This is useful for identifying high-cost bottleneck links in the topology.

Panel 6 — 3D Cost Surface

The most visually striking panel: a smooth surface where the X-axis is link latency, Y-axis is packet loss %, and Z-axis is composite cost. The red dots are the actual network edges plotted on this surface.

$$Z = 0.7 \cdot L + 0.3 \cdot (-\log(1-p)) \cdot 100$$

This surface shows that the packet-loss term grows super-linearly (since $-\log(1-p)$ curves upward as $p$ increases), while latency contributes linearly. Links with even moderate loss (>3%) become disproportionately expensive.
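To put numbers on that curvature, a short sketch (using the post's $\beta = 0.3$, $S = 100$) compares the loss term against its linear approximation $\beta \cdot p \cdot S$ — the gap widens as loss grows:

```python
import math

def loss_term(p, beta=0.3, scale=100.0):
    """Packet-loss contribution to the composite cost: beta * (-log(1 - p)) * scale."""
    return beta * (-math.log(1 - p)) * scale

for p in (0.01, 0.03, 0.05, 0.10):
    linear = 0.3 * p * 100.0  # first-order (linear) approximation
    print(f"loss={p:.0%}  term={loss_term(p):.4f}  linear={linear:.4f}")
```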


Execution Results

=================================================================
Algorithm              Path                          Composite
=================================================================
Dijkstra               0 → 1 → 4 → 8 → 9               39.6183
Bellman-Ford           0 → 1 → 4 → 8 → 9               39.6183
Genetic Algorithm      0 → 1 → 4 → 8 → 9               39.6183
Simulated Annealing    0 → 1 → 4 → 8 → 9               39.6183
=================================================================

── Detailed Metrics ──
Algorithm               Latency(ms)  Loss(sum)  Delivery%
----------------------------------------------------------
Dijkstra                       54.0     0.0600     94.12%
Bellman-Ford                   54.0     0.0600     94.12%
Genetic Algorithm              54.0     0.0600     94.12%
Simulated Annealing            54.0     0.0600     94.12%

Figure saved.

Key Takeaways

  • Dijkstra is the gold standard for this problem class: optimal, fast, and deterministic.
  • Bellman-Ford confirms the result and extends to negative-weight scenarios.
  • GA and SA are valuable when the cost function is non-convex, multi-objective, or involves constraints that break traditional shortest-path assumptions.
  • The log transform on packet loss is critical: without it, a path with 5% loss per hop across 5 hops would be scored as a 25% additive loss, whereas the true compounded loss is $1 - 0.95^5 \approx 22.6\%$. With the transform, delivery probability becomes additive in log-space, giving mathematically correct path evaluation.
  • The 3D surface makes it visually clear why packet loss is often the dominant term in real network routing decisions.
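To put the log-transform takeaway in numbers, here is the 5-hop, 5%-per-hop example: the naive additive figure versus the compounded delivery probability, computed both directly and via the log transform:

```python
import math

p, hops = 0.05, 5
naive_loss = p * hops                                              # additive shortcut: 0.25
true_delivery = (1 - p) ** hops                                    # compounded: 0.95**5
log_space = math.exp(-sum(-math.log(1 - p) for _ in range(hops)))  # same value via log-space sum
print(naive_loss, round(true_delivery, 4), round(log_space, 4))
```

The additive shortcut reports 25% loss, while the compounded delivery probability is about 77.4% (so roughly 22.6% loss); only the log-space sum reproduces the exact multiplicative figure.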