While doing a performance comparison between several linear algebra libraries, I had to read in several large sparse matrices (more than 21 million nonzero values each). I'm not going to claim that this is the fastest way to read in a matrix stored on disk, but for me it was fast enough.

### The Data Structure

This struct contains three std::vectors which store the row, column, and value entries from each line in the file. Two assumptions are made about the matrix: there are no rows with all zero entries, and the last column containing data is the last column of the matrix. If these assumptions do not hold for your matrix, you will need to set the dimensions manually after reading the data, because the MATLAB ASCII sparse matrix format does not store the number of rows and columns.
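For reference, such a file simply lists one `row column value` triple per line. A hypothetical 3×3 matrix with four nonzeros (my own made-up example, not from the original benchmark data) might be stored as:

```
1 1 4.0
1 3 -1.0
2 2 5.0
3 3 2.5
```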

```
#include <iostream>
#include <vector>
#include <algorithm>

struct COO {
    std::vector<size_t> row;  // Row index of each entry
    std::vector<size_t> col;  // Column index of each entry
    std::vector<double> val;  // Values of the nonzero entries
    size_t num_rows;          // Number of rows
    size_t num_cols;          // Number of columns
    size_t num_nonzero;       // Number of nonzeros

    // Once the data has been read in, compute the number of rows,
    // columns, and nonzeros. Taking num_rows from the last row index
    // read assumes the entries are sorted by row.
    void update() {
        num_rows = row.back();
        num_cols = *std::max_element(col.begin(), col.end());
        num_nonzero = val.size();
        std::cout << "COO Updated: [Rows, Columns, Non Zeros] ["
                  << num_rows << ", " << num_cols << ", "
                  << num_nonzero << "]" << std::endl;
    }
};
```