--- //[[luong97@hawaii.edu|Kenny Luong]] 2020/04/27 18:21// For the moment, this is just discussion. The more I think about it, the more I think the gateway should be a lot more hands-off about the type of data it stores in the backend. By having a generic table with room for generic sensor readings, we take quite a bit of load off the folks maintaining the gateway; there is no longer any need to run migrations when a new type of sensor module is added. (We currently maintain one schema for each generation.) Here is what such a table could look like (a rough SQL sketch follows at the end of this section):

  * timestamp
  * mac_address
  * node_id
  * node_version
  * data_sample_1
  * data_sample_2
  * data_sample_3
  * data_sample_4
  * data_sample_5
  * data_sample_6
  * data_sample_n

Samples that are not valid can be set to NULL - they don't take up much space: https://stackoverflow.com/questions/12145772/do-nullable-columns-occupy-additional-space-in-postgresql/12147130

The data frames that carry sensor readings from end devices can be constructed as a mostly 1:1 mapping to the database schema.

**Downsides**: the data itself is not very approachable. You'd have to look up the details in the schema to understand which samples represent which readings. Idea: could we use a view to make the columns more readable? (A sketch of such a view is included below.)

**Upsides**: it will be much easier for folks to inject new types of data into the gateway; for example, taking sensor readings over time for a new kind of module.

**Other Ideas**: it would be nice to have another table where we could store raw frames, in case there is a processing error somewhere downstream.
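
To make the column list above concrete, here is a minimal PostgreSQL sketch of the generic readings table. The table name ''sensor_readings'', the column types, and the choice of six sample slots are assumptions for illustration, not a settled design.

<code sql>
-- Hypothetical generic readings table: one row per received data frame.
-- Names and types are placeholders; the real design would pick the
-- appropriate precision and number of sample columns.
CREATE TABLE sensor_readings (
    id            BIGSERIAL PRIMARY KEY,
    "timestamp"   TIMESTAMPTZ NOT NULL,
    mac_address   MACADDR     NOT NULL,
    node_id       INTEGER     NOT NULL,
    node_version  INTEGER     NOT NULL,
    data_sample_1 DOUBLE PRECISION,  -- NULL when the slot is unused
    data_sample_2 DOUBLE PRECISION,
    data_sample_3 DOUBLE PRECISION,
    data_sample_4 DOUBLE PRECISION,
    data_sample_5 DOUBLE PRECISION,
    data_sample_6 DOUBLE PRECISION
);
</code>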
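
On the view idea under **Downsides**: a per-node-type view could map the generic sample slots to human-readable names. A hypothetical example, assuming a ''node_version'' of 2 corresponds to a temperature/humidity module; the view name, column names, and version number are made up for illustration.

<code sql>
-- Hypothetical view for one node generation: gives the generic sample
-- slots meaningful names so consumers don't need to consult schema docs.
CREATE VIEW temperature_humidity_readings AS
SELECT
    "timestamp",
    mac_address,
    node_id,
    data_sample_1 AS temperature_c,
    data_sample_2 AS relative_humidity_pct,
    data_sample_3 AS battery_voltage_v
FROM sensor_readings
WHERE node_version = 2;  -- assumed version number for this module type
</code>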
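
On the raw-frames idea under **Other Ideas**: a companion table could keep the untouched payload bytes so frames can be re-decoded after a processing bug is fixed. Again, just a sketch under assumed names.

<code sql>
-- Hypothetical raw-frame archive: keeps the original payload so data
-- can be re-parsed if the decoding pipeline had an error.
CREATE TABLE raw_frames (
    id          BIGSERIAL PRIMARY KEY,
    received_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    mac_address MACADDR,
    frame       BYTEA NOT NULL  -- exact bytes as received from the end device
);
</code>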