What is the right database technology for the simple BI tool use case outlined below?

Reaching out to the community to pressure test our internal thinking.

We are building a simplified business intelligence platform that will aggregate metrics (e.g. traffic, backlinks) and text lists (e.g. search keywords, detected technologies) from several data providers.

The data will be somewhat loosely structured and may change over time, since vendors may change their response formats.

Long-term data volume is expected to be on the order of 100,000 rows × 25 input vectors.

Data would be updated and read continuously, but not at massive concurrent volume.

We expect to need some ETL transformations on the data gathered from partners on its way to the UI (e.g. showing trend information over the past five captured data points).
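To make the kind of transformation concrete, here is a minimal sketch of the trend calculation we have in mind, assuming each metric arrives as timestamped snapshots; the function and variable names are ours, not from any provider API:

```python
from statistics import mean

def trend_over_last_n(snapshots, n=5):
    """Average change per capture over the last n snapshots.

    `snapshots` is a list of (timestamp, value) tuples, oldest first.
    """
    recent = sorted(snapshots)[-n:]
    values = [v for _, v in recent]
    if len(values) < 2:
        return 0.0  # not enough history to compute a trend
    deltas = [b - a for a, b in zip(values, values[1:])]
    return mean(deltas)

# Five captured traffic data points, oldest first.
history = [(1, 100), (2, 110), (3, 105), (4, 120), (5, 130)]
print(trend_over_last_n(history))  # → 7.5 (average per-capture change)
```

Whether this runs in the database (e.g. a query) or in application code after fetching the last five snapshots is part of what we're trying to decide.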

We want to archive every data snapshot (i.e. version it) rather than only storing the most recent data point.

The persistence technology should be readily available through AWS.

Our assumption is that these requirements lend themselves best to DynamoDB (vs. Amazon Neptune, Redshift, or Aurora).
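For context, here is how we imagine the versioned-snapshot requirement mapping onto a DynamoDB key design: partition key = entity/metric, sort key = capture timestamp, so every snapshot is retained and the latest one is just the highest sort key in the partition. The sketch below only simulates that access pattern with an in-memory dict (no AWS calls); the key names and entities are hypothetical:

```python
from collections import defaultdict

# Simulated table: {partition_key: {sort_key: item}}.
# In DynamoDB this would be PK = entity, SK = captured_at.
table = defaultdict(dict)

def put_snapshot(entity, captured_at, payload):
    # The loosely structured payload is stored as-is (a JSON-like dict),
    # which tolerates vendors changing their response formats over time.
    table[entity][captured_at] = {"entity": entity, "captured_at": captured_at, **payload}

def latest_snapshot(entity):
    # DynamoDB equivalent: Query the partition with ScanIndexForward=False, Limit=1.
    newest_key = max(table[entity])
    return table[entity][newest_key]

def snapshot_history(entity, n=5):
    # DynamoDB equivalent: Query the partition, take the last n sort keys.
    return [table[entity][sk] for sk in sorted(table[entity])[-n:]]

put_snapshot("example.com#traffic", "2024-01-01", {"visits": 100})
put_snapshot("example.com#traffic", "2024-02-01", {"visits": 130})
print(latest_snapshot("example.com#traffic")["visits"])  # → 130
print(len(snapshot_history("example.com#traffic")))      # → 2
```

If this key design is wrong-headed for our read patterns (or if a relational option like Aurora with a JSONB-style column would serve the trending queries better), that's exactly the kind of feedback we're after.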

Is that a fair assumption? Is there any other information I can provide to elicit input from this community?
