The following Python 2.7 script calls the GA-LMS (standard) binary to perform a system identification task.
In this example, the multivector samples belong to the Geometric Algebra of $\mathbb{R}^3$, namely $\mathcal{G}(\mathbb{R}^3)$. Thus, each regressor and weight vector entry has 8 coefficients, i.e., for each entry of the weight vector, 8 coefficients have to be estimated. This contrasts with the usual LMS, which only estimates real- or complex-valued entries. For further information, please refer to the GA documentation at www.openga.org.
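For concreteness, a multivector of $\mathcal{G}(\mathbb{R}^3)$ can be thought of as 8 real coefficients, one per basis blade (1 scalar, 3 vectors, 3 bivectors, 1 trivector). The short sketch below is purely illustrative — the blade ordering may differ from the one used internally by the GA-LMS binary — and just prints such a multivector:
# Illustrative only: basis blades of G(R^3) and one coefficient per blade
blades = ['1', 'e1', 'e2', 'e3', 'e12', 'e13', 'e23', 'e123']
m = [0.5, -1.2, 0.0, 3.4, 0.7, 0.0, -2.1, 1.0]  # hypothetical multivector
for blade, coeff in zip(blades, m):
    print '%5s : %g' % (blade, coeff)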
At the end, the Mean-Square Error (MSE) and/or Excess Mean-Square Error (EMSE) learning curves are plotted.
Start by importing the necessary Python modules:
# Script to call the GAAFs binaries.
# Case: GA-LMS standard with multivector entries
#
# Wilder Lopes - wil@openga.org
# Dec 2015
import sys, string, os
import matplotlib.pyplot as plt
import numpy as np
from math import log
The user can set the following adaptive filter (AF) parameters:
Number of filter taps (system order): M
Realizations: L
Time iterations: N
AF step size: mu
Measurement Noise variance: sigma2v
# Simulation parameters
M = 5 # System order
L = 100 # Realizations
N = 1000 # Time iterations
mu = 0.005 # AF Step size
sigma2v = 1e-3 # Variance of measurement noise
The binary is called below using the previously set parameters. The GA-LMS runs and returns .txt files with the results: *_galms.out and *_theory.out, where * stands for "MSE" or "EMSE". The *_galms.out files store the ensemble-average learning curves, while the *_theory.out files store the theoretical steady-state values of MSE and EMSE.
# Calling binary
arguments = " " + str(M) + " " + str(L) + " " + str(N) + " " + str(mu)+ \
" " + str(sigma2v)
os.system("../GAAFs_Standard/GA-LMS/build/GA-LMS" + arguments)
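Optionally, one can verify that the binary produced the expected output files before trying to load them. This is a minimal sanity check; it assumes the binary writes its .txt files into the current working directory:
# Sanity check: warn if any expected output file is missing
expected_files = ['w_galms.out', 'MSE_galms.out', 'MSE_theory.out',
                  'EMSE_galms.out', 'EMSE_theory.out']
for fname in expected_files:
    if not os.path.isfile(fname):
        print 'Warning: expected output file not found: ' + fname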
Show the final estimate of the weight array:
w_galms = open('w_galms.out', 'r')
print 'Final weight array in GAALET notation:' + '\n'
for line in w_galms:
    print line
Load files MSE_galms.out and MSE_theory.out to plot the MSE learning curve and the theoretical steady-state curve. The values are converted to dB ($10\log_{10}$) before plotting:
f1 = open('MSE_galms.out', 'r')
data_label1 = ['MSE_galms']
data1_list = []
for line in f1:
    data1_list.append(line.rstrip('\n'))

f2 = open('MSE_theory.out', 'r')
data_label2 = ['MSE_theory']
data2_list = []
# MSE_theory.out holds a single steady-state value; replicate it to the
# length of the ensemble-average curve so it plots as a horizontal line.
for line in f2:
    for i in range(len(data1_list)):
        data2_list.append(line.rstrip('\n'))
data1 = [float(j) for j in data1_list] # Converts to float
data2 = [float(j) for j in data2_list] # Converts to float
data1_dB = [10*log(x,10) for x in data1] # Converts to dB: 10*log10(.)
data2_dB = [10*log(x,10) for x in data2]
plt.title('MSE curves')
plt.ylabel('MSE (dB)')
plt.xlabel('Iterations')
plt.plot(data1_dB, label = 'MSE_galms', color = 'blue')
plt.plot(data2_dB, label = 'MSE_theory', color = 'magenta')
plt.legend()
plt.show()
Load files EMSE_galms.out and EMSE_theory.out to plot the EMSE learning curve and the theoretical steady-state curve. Again, the values are converted to dB before plotting:
f1 = open('EMSE_galms.out', 'r')
data_label1 = ['EMSE_galms']
data1_list = []
for line in f1:
    data1_list.append(line.rstrip('\n'))

f2 = open('EMSE_theory.out', 'r')
data_label2 = ['EMSE_theory']
data2_list = []
# As before, EMSE_theory.out holds a single steady-state value; replicate it
# so it plots as a horizontal line with the same length as the learning curve.
for line in f2:
    for i in range(len(data1_list)):
        data2_list.append(line.rstrip('\n'))
data1 = [float(j) for j in data1_list] # Converts to float
data2 = [float(j) for j in data2_list] # Converts to float
data1_dB = [10*log(x,10) for x in data1] # Converts to dB: 10*log10(.)
data2_dB = [10*log(x,10) for x in data2]
plt.title('EMSE curves')
plt.ylabel('EMSE (dB)')
plt.xlabel('Iterations')
plt.plot(data1_dB, label = 'EMSE_galms', color = 'r')
plt.plot(data2_dB, label = 'EMSE_theory', color = 'magenta')
plt.legend()
plt.show()
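If the figures should also be kept on disk (e.g., when running the script on a remote machine), matplotlib's savefig can be called right before each plt.show(); the file name below is arbitrary:
plt.savefig('EMSE_curves.png', dpi=150)  # save the current figure as a PNG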