
I have a Java program that writes rows into my GridDB table via the NoSQL API. I am catching errors properly, and the logs show the data being stored into my container via multiPut, but when I go into the CLI to view the contents, the container is completely empty. Why would that be?

First, here's what the container looks like:

gs[public]> showcontainer LOG_agent_intrusion_exploit
Database    : public
Name        : LOG_agent_intrusion_exploit
Type        : COLLECTION
Partition ID: 22
DataAffinity: -
Partitioned : true
Partition Type           : INTERVAL
Partition Column         : timestamp
Partition Interval Value : 30
Partition Interval Unit  : DAY
Expiration Type      : PARTITION
Expiration Time      : 30
Expiration Time Unit : DAY

Columns:
No  Name               Type          CSTR  RowKey
--------------------------------------------------
 0  timestamp          TIMESTAMP(3)  NN
 1  username           STRING
 2  incomingIP         STRING
 3  serverIP           STRING
 4  mtu                INTEGER
 5  statusCode         INTEGER
 6  cacheHit           STRING
 7  method             STRING
 8  url                STRING
 9  urlPrefix          STRING
10  urlSuffix          STRING
11  httpVersion        STRING
12  service            STRING
13  riskLevel          STRING
14  headerContentType  STRING
15  bytesReceived      INTEGER
16  bytesSent          INTEGER
17  headerAgent        STRING
18  url2               STRING
19  url2Prefix         STRING
20  url2Suffix         STRING
21  meta1              STRING
22  meta2              STRING
23  meta3              STRING
24  meta4              STRING

My log parser builds a HashMap keyed by container name, gathers up all the rows for each container, and then calls store.multiPut to push everything to GridDB. Around this I have a simple try/catch block, which normally does catch GridDB errors.

Quick snippet:

        for (RawLog log : logs) {
            try {
                System.out.println(log.logtype + "~~~~~~");
                System.out.println("configs.get(log.logtype): " + configs.get(log.logtype));
                Row row = lp.patternMatcher(proc_container, log, configs.get(log.logtype));
                if (row != null) {
                    proc_logs.add(row);
                    System.out.println("parsing this row: " + row);
                } else {
                    System.out.println("Could not parse " + log);
                }
            } catch (Exception e) {
                e.printStackTrace();
                System.out.println("Could not parse " + log);
            }
        }
        containerRowsMap.put(proc_container, proc_logs);
    } // end of the enclosing per-container loop

    try {
        db.store.multiPut(containerRowsMap);
    } catch (Exception e) {
        System.out.println("Error with inserting data");
        e.printStackTrace();
    }
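For what it's worth, a tighter sketch of the multiPut error handling would catch GSException rather than a blanket Exception, since it carries GridDB's numeric error code. This is a minimal illustration, assuming the standard GridDB Java client (package com.toshiba.mwcloudb.gs); the class and method names here are mine, not from my real program:

    import java.util.List;
    import java.util.Map;

    import com.toshiba.mwcloudb.gs.GSException;
    import com.toshiba.mwcloudb.gs.GridStore;
    import com.toshiba.mwcloudb.gs.Row;

    public class MultiPutHelper {
        // Illustrative helper: push all accumulated rows in one batch and
        // report GridDB's numeric error code if the batch fails.
        static void pushRows(GridStore store, Map<String, List<Row>> containerRowsMap) {
            try {
                store.multiPut(containerRowsMap);
            } catch (GSException e) {
                System.err.println("multiPut failed (error code "
                        + e.getErrorCode() + "): " + e.getMessage());
                e.printStackTrace();
            }
        }
    }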

So, in this case, it seems as though the data should be in my table, but when I run a simple query, select * from LOG_agent_intrusion_exploit;, I get zero rows back.

I tried forgoing multiPut and inserting one row at a time, but I get the exact same behavior.
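A minimal sketch of that single-row variant, again assuming the standard GridDB Java client; the helper name is illustrative:

    import java.util.List;

    import com.toshiba.mwcloudb.gs.Container;
    import com.toshiba.mwcloudb.gs.GSException;
    import com.toshiba.mwcloudb.gs.GridStore;
    import com.toshiba.mwcloudb.gs.Row;

    public class SingleRowPut {
        // Illustrative sketch: fetch the container by name and put each
        // parsed row individually instead of batching with multiPut.
        static void putOneByOne(GridStore store, String containerName, List<Row> rows)
                throws GSException {
            Container<Object, Row> container = store.getContainer(containerName);
            for (Row row : rows) {
                container.put(row);
            }
        }
    }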

1 Answer


UPDATE: I was able to figure this one out. As it turns out, I had expiry rules set for 30 days (i.e. my data rows expire every 30 days) but was ingesting data much older than that (~10 years old). So GridDB was ingesting the data and it really was there; the cluster was simply doing its job and expiring the rows, making them unreadable (and possibly deleting them; I'm not sure how quickly that process occurs).

So the solution was simply to remove the expiry rules, or to make the retention window long enough that ten-year-old data wasn't immediately flushed out.
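In other words, any row whose timestamp falls outside the retention window is expired essentially as soon as it lands. A quick, self-contained way to sanity-check timestamps before ingesting (plain java.time, no GridDB calls; the 30-day window mirrors the container's PARTITION expiration settings above):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Date;

    public class ExpiryCheck {
        // Retention window matching the container's 30-day PARTITION expiration.
        static final Duration RETENTION = Duration.ofDays(30);

        // A row older than the retention window will be expired (and become
        // unreadable) almost immediately after it is ingested.
        static boolean withinRetention(Date rowTimestamp) {
            return rowTimestamp.toInstant().isAfter(Instant.now().minus(RETENTION));
        }

        public static void main(String[] args) {
            Date tenYearsOld = Date.from(Instant.now().minus(Duration.ofDays(3650)));
            Date yesterday = Date.from(Instant.now().minus(Duration.ofDays(1)));
            System.out.println(withinRetention(tenYearsOld)); // false -> silently expired
            System.out.println(withinRetention(yesterday));   // true  -> readable
        }
    }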
