I am wondering about the mathematical implications of concatenating very small windows taken from different trials. All of the trials are the same length and cover the same period of time after a stimulus.
In other words, let's say my data has 3 channels, 6 trials, and 10 data points per trial (data size [3x10x6]). What I want to build is a new data set with the same 3 channels, constructed by taking a window of 2 data points at the same position in every trial, concatenating those windows across all 6 trials, and then repeating this for every possible window position. In this example that gives 5 windows (new data size [3x12x5]).
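To make the construction concrete, here is a minimal NumPy sketch of what I mean (the array layout [channels x time x trials] and the non-overlapping windows are my assumptions from the sizes above):

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_time, n_trials = 3, 10, 6   # [3 x 10 x 6], as in the example
win_len = 2                            # window of 2 data points

data = rng.standard_normal((n_chan, n_time, n_trials))

n_wins = n_time // win_len             # 5 possible window positions
new_data = np.empty((n_chan, win_len * n_trials, n_wins))

for w in range(n_wins):
    # same 2-point window from every trial: [3 x 2 x 6]
    seg = data[:, w * win_len:(w + 1) * win_len, :]
    # concatenate those windows trial-by-trial along the time axis: [3 x 12]
    new_data[:, :, w] = np.concatenate(
        [seg[:, :, t] for t in range(n_trials)], axis=1
    )

print(new_data.shape)  # (3, 12, 5)
```

So the w-th slice of the new array is trial 1's window w, followed by trial 2's window w, and so on.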
I assume that any frequency analysis will be pointless because of the discontinuities introduced at the seams between trials, but what other implications for the data should I be concerned about? Would it still be reasonable to detrend or normalize this new data?