
Update SVDF unit tests (#67)

SVDF int8 uses the tflite_micro interpreter for reference data. To
accommodate this, the generate script is refactored and split into
smaller files.
Måns Nilsson, 2 years ago
Parent
Commit fdbbcbe622
52 changed files with 2215 additions and 1958 deletions
  1. + 13 - 5 Tests/UnitTest/README.md
  2. + 3 - 2 Tests/UnitTest/TestCases/TestData/svdf/biases_data.h
  3. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf/config_data.h
  4. + 3 - 2 Tests/UnitTest/TestCases/TestData/svdf/input_sequence_data.h
  5. + 3 - 2 Tests/UnitTest/TestCases/TestData/svdf/output_ref_data.h
  6. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf/state_data.h
  7. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf/test_data.h
  8. + 7 - 6 Tests/UnitTest/TestCases/TestData/svdf/weights_feature_data.h
  9. + 12 - 12 Tests/UnitTest/TestCases/TestData/svdf/weights_time_data.h
  10. + 3 - 2 Tests/UnitTest/TestCases/TestData/svdf_1/biases_data.h
  11. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_1/config_data.h
  12. + 5 - 4 Tests/UnitTest/TestCases/TestData/svdf_1/input_sequence_data.h
  13. + 3 - 2 Tests/UnitTest/TestCases/TestData/svdf_1/output_ref_data.h
  14. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_1/state_data.h
  15. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_1/test_data.h
  16. + 5 - 4 Tests/UnitTest/TestCases/TestData/svdf_1/weights_feature_data.h
  17. + 3 - 2 Tests/UnitTest/TestCases/TestData/svdf_1/weights_time_data.h
  18. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_2/biases_data.h
  19. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_2/config_data.h
  20. + 5 - 4 Tests/UnitTest/TestCases/TestData/svdf_2/input_sequence_data.h
  21. + 3 - 2 Tests/UnitTest/TestCases/TestData/svdf_2/output_ref_data.h
  22. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_2/state_data.h
  23. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_2/test_data.h
  24. + 7 - 6 Tests/UnitTest/TestCases/TestData/svdf_2/weights_feature_data.h
  25. + 4 - 3 Tests/UnitTest/TestCases/TestData/svdf_2/weights_time_data.h
  26. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_3/biases_data.h
  27. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_3/config_data.h
  28. + 5 - 4 Tests/UnitTest/TestCases/TestData/svdf_3/input_sequence_data.h
  29. + 3 - 2 Tests/UnitTest/TestCases/TestData/svdf_3/output_ref_data.h
  30. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_3/state_data.h
  31. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_3/test_data.h
  32. + 14 - 13 Tests/UnitTest/TestCases/TestData/svdf_3/weights_feature_data.h
  33. + 4 - 3 Tests/UnitTest/TestCases/TestData/svdf_3/weights_time_data.h
  34. + 1 - 1 Tests/UnitTest/TestCases/TestData/svdf_int8/biases_data.h
  35. + 3 - 3 Tests/UnitTest/TestCases/TestData/svdf_int8/config_data.h
  36. + 1 - 1 Tests/UnitTest/TestCases/TestData/svdf_int8/input_sequence_data.h
  37. + 6 - 0 Tests/UnitTest/TestCases/TestData/svdf_int8/output_ref_data.h
  38. + 1 - 1 Tests/UnitTest/TestCases/TestData/svdf_int8/state_data.h
  39. + 2 - 1 Tests/UnitTest/TestCases/TestData/svdf_int8/test_data.h
  40. + 1 - 1 Tests/UnitTest/TestCases/TestData/svdf_int8/weights_feature_data.h
  41. + 1 - 1 Tests/UnitTest/TestCases/TestData/svdf_int8/weights_time_data.h
  42. + 4 - 0 Tests/UnitTest/TestCases/test_arm_svdf_s8/test_arm_svdf_s8.c
  43. + 160 - 0 Tests/UnitTest/add_mul_settings.py
  44. + 204 - 0 Tests/UnitTest/conv_settings.py
  45. + 173 - 0 Tests/UnitTest/fully_connected_settings.py
  46. + 14 - 1850 Tests/UnitTest/generate_test_data.py
  47. + 409 - 0 Tests/UnitTest/lstm_settings.py
  48. + 6 - 5 Tests/UnitTest/model_extractor.py
  49. + 128 - 0 Tests/UnitTest/pooling_settings.py
  50. + 163 - 0 Tests/UnitTest/softmax_settings.py
  51. + 256 - 0 Tests/UnitTest/svdf_settings.py
  52. + 549 - 0 Tests/UnitTest/test_settings.py

+ 13 - 5
Tests/UnitTest/README.md

@@ -53,13 +53,17 @@ Remember to add the built flatc binary to the path.
 
 For schema file download [schema.fbs](https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/lite/schema/schema.fbs).
 
-#### Using tflite_runtime
-Python package tensorflow is always needed however the script has the option to use tflite_runtime for the interpreter, which will generate the actual reference output. Python package tflite_runtime can be installed with pip and it can also be built locally. Check this [link](https://www.tensorflow.org/lite/guide/build_cmake_pip) on how to do that.
-To use the tflite_runtime the script currently has to be modified.
+#### Interpreter for generating reference output
+The Python package tensorflow is always needed; however, the script has the option to use interpreters other than the tensorflow default to generate the actual reference output.
 
-## Getting started
+##### tflite_runtime
+Python package tflite_runtime can be installed with pip and it can also be built locally. Check this [link](https://www.tensorflow.org/lite/guide/build_cmake_pip) on how to do that.
+Use the -h flag to get more info on supported interpreters.
 
+##### tflite_micro
+This interpreter is partially supported. See this comment for more info: https://github.com/tensorflow/tflite-micro/issues/1484#issuecomment-1677842603.
 
+## Getting started
 
 ### Using Arm Mbed OS supported hardware
 
@@ -122,7 +126,11 @@ When adding a new test data set, new c files should be added or existing c files
 
 The steps to add a new unit test are as follows. Add a new test set in the load_all_testdatasets() function. Run the generate script with that new test set as input. Add the newly generated header files to an existing or new unit test.
 
-### Tests depending on specific TFL versions or patched TFL version
+### Tests depending on specific TFL versions, a patched TFL version or the TFLM interpreter
+
+#### SVDF INT8
+This test depends on tflite_micro for its reference data, because the operator is only supported by TFLM.
+Note that the tflite_micro interpreter is currently only supported for SVDF.
 
 #### LSTM
 

+ 3 - 2
Tests/UnitTest/TestCases/TestData/svdf/biases_data.h

@@ -1,5 +1,6 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int32_t svdf_biases[3] = {-53, -125, 86};
+const int32_t svdf_biases[3] = {50, 9, 0};

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf/config_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #define SVDF_MULTIPLIER_IN 1717987072
 #define SVDF_MULTIPLIER_OUT 1099511552

+ 3 - 2
Tests/UnitTest/TestCases/TestData/svdf/input_sequence_data.h

@@ -1,5 +1,6 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_input_sequence[12] = {-16, -111, -35, -39, -102, -89, 12, -117, 44, -73, -104, 113};
+const int8_t svdf_input_sequence[12] = {112, -96, -125, 39, -126, 37, 54, -118, 21, -30, 121, -41};

+ 3 - 2
Tests/UnitTest/TestCases/TestData/svdf/output_ref_data.h

@@ -1,5 +1,6 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_output_ref[6] = {95, 63, -22, 80, 38, 61};
+const int8_t svdf_output_ref[6] = {73, 52, -128, -128, -16, 61};

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf/state_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf/test_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #include "biases_data.h"
 #include "config_data.h"
 #include "input_sequence_data.h"

+ 7 - 6
Tests/UnitTest/TestCases/TestData/svdf/weights_feature_data.h

@@ -1,9 +1,10 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_weights_feature[72] = {43,  34,  -69, 80,   -11,  -78, -73, -60, 3,   -98, -111, -40, -102, 15, -99,
-                                         -96, 64,  1,   -120, 117,  94,  -6,  -43, 80,  101, 25,   41,  71,   19, -76,
-                                         55,  -49, -92, 98,   -58,  4,   126, 65,  112, -22, 65,   -86, 38,   64, -86,
-                                         -39, 65,  6,   -97,  -116, 15,  -84, 122, 40,  18,  40,   -78, 84,   63, -35,
-                                         -6,  11,  69,  -45,  51,   80,  -9,  17,  102, 63,  4,    11};
+const int8_t svdf_weights_feature[72] = {
+    29,  -112, 89,  -105, 11, -96, -85, -2,   -24,  114, 75,  -109, 84,  82,  118,  -121, 47,  -11,
+    20,  -37,  111, -87,  22, 71,  14,  99,   93,   -96, -65, 61,   115, -48, 125,  -68,  -62, 59,
+    100, -47,  38,  -23,  30, -54, -64, 0,    -112, -63, -26, 96,   -95, 105, -99,  93,   108, 104,
+    104, -91,  12,  -57,  7,  -13, -13, -124, 70,   -49, 17,  -60,  111, 105, -114, 59,   -81, 55};

+ 12 - 12
Tests/UnitTest/TestCases/TestData/svdf/weights_time_data.h

@@ -1,16 +1,16 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
 const int16_t svdf_weights_time[192] = {
-    -55,  105, 35,  20,   -20,  -123, -38,  92,   88,  13,  119, 14,   -108, -118, -11,  91,   -101, -115,
-    -82,  -6,  -12, 63,   -1,   107,  -24,  2,    67,  85,  -9,  109,  49,   64,   -4,   97,   -110, 18,
-    3,    87,  -52, -56,  -98,  -106, 116,  -29,  100, -93, -46, -8,   -19,  -63,  -58,  -120, 47,   87,
-    -101, 21,  78,  -39,  -77,  -51,  -101, -106, -28, 82,  30,  -35,  16,   105,  -106, 124,  105,  34,
-    -117, 104, 74,  -114, -108, 34,   -111, 32,   8,   -44, -41, -20,  -104, 113,  -63,  116,  -101, 114,
-    61,   125, 103, 20,   -111, 123,  93,   123,  -5,  91,  32,  111,  -116, -1,   35,   69,   -59,  -105,
-    20,   -38, 88,  -3,   65,   -115, 78,   -39,  -49, 121, 23,  -122, 76,   -125, -66,  92,   101,  62,
-    -103, 24,  68,  -104, -82,  -114, -81,  124,  32,  47,  111, 75,   -74,  1,    -90,  61,   106,  123,
-    74,   120, 51,  56,   80,   6,    21,   -100, -35, 119, 11,  -88,  -45,  73,   117,  -79,  -58,  -73,
-    -66,  36,  -54, 11,   -84,  -64,  -87,  -109, 87,  106, -98, 31,   -48,  -47,  -93,  125,  24,   104,
-    -9,   37,  29,  95,   -47,  77,   9,    63,   79,  -92, 82,  22};
+    39,   -11,  13,   -41, -101, -106, -11,  93,   27,   -10,  100,  11,   -128, 13,   125,  -98, 93,  27,  -80, 80,
+    -120, 55,   -37,  27,  24,   41,   -72,  101,  -17,  -21,  116,  -32,  103,  33,   -19,  3,   38,  33,  -7,  -115,
+    -43,  -94,  -77,  -73, -52,  -5,   42,   53,   -22,  105,  -52,  -121, -53,  -49,  -37,  91,  52,  -65, 15,  -58,
+    108,  45,   84,   28,  8,    46,   20,   -101, -6,   -63,  -53,  -105, -74,  -72,  28,   -6,  -30, -88, 84,  67,
+    39,   -92,  -115, -66, 100,  52,   -41,  78,   -41,  115,  42,   42,   -51,  -115, 34,   -89, 11,  35,  71,  64,
+    10,   -30,  -22,  -1,  -23,  -67,  43,   -23,  -110, 43,   42,   105,  109,  74,   -12,  -35, 58,  4,   53,  122,
+    45,   83,   118,  -30, -34,  54,   -111, -60,  28,   -50,  33,   94,   121,  54,   -49,  -32, -56, 27,  95,  29,
+    46,   -86,  -3,   82,  -85,  -119, 90,   -31,  17,   4,    -118, -90,  -116, 65,   -119, -66, -86, 93,  2,   93,
+    6,    -124, 125,  -54, -36,  -85,  39,   -15,  -21,  123,  -44,  -51,  -30,  86,   79,   -16, 70,  -38, -15, 8,
+    33,   -94,  -7,   77,  -17,  -36,  -113, -95,  48,   -121, -92,  -119};

+ 3 - 2
Tests/UnitTest/TestCases/TestData/svdf_1/biases_data.h

@@ -1,5 +1,6 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int32_t svdf_1_biases[5] = {-16, -33, 37, 112, 20};
+const int32_t svdf_1_biases[5] = {29, 18, 81, 1, -5};

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_1/config_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #define SVDF_1_MULTIPLIER_IN 1717987072
 #define SVDF_1_MULTIPLIER_OUT 1099511552

+ 5 - 4
Tests/UnitTest/TestCases/TestData/svdf_1/input_sequence_data.h

@@ -1,7 +1,8 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_1_input_sequence[42] = {-116, 46,  -102, 125, 27,  -49, -64,  72, 13, 117, -119, 79,  -94,  1,
-                                          -77,  31,  38,   -15, 9,   65,  -102, 30, 10, -30, 16,   -52, 64,   106,
-                                          31,   -59, -78,  -14, -68, 99,  3,    53, 7,  109, -3,   -77, -126, -14};
+const int8_t svdf_1_input_sequence[42] = {-60,  25, 18,  58,  -49,  -118, 95,  33, -72, 25,  -76, 78,  -118, 126,
+                                          -102, 96, -17, -92, -121, -55,  -31, 48, -60, 48,  32,  69,  2,    -68,
+                                          56,   94, -89, 51,  50,   43,   8,   67, 74,  -63, -50, -42, -84,  -43};

+ 3 - 2
Tests/UnitTest/TestCases/TestData/svdf_1/output_ref_data.h

@@ -1,5 +1,6 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_1_output_ref[15] = {5, 6, 28, 58, -57, 12, -27, 8, -52, 63, -14, 18, -35, 37, -50};
+const int8_t svdf_1_output_ref[15] = {46, -105, -20, -24, -1, 105, 61, 11, 26, -20, -106, 22, -43, -26, 10};

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_1/state_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_1/test_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #include "biases_data.h"
 #include "config_data.h"
 #include "input_sequence_data.h"

+ 5 - 4
Tests/UnitTest/TestCases/TestData/svdf_1/weights_feature_data.h

@@ -1,7 +1,8 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_1_weights_feature[35] = {83, 21,  61, 103, 52,   2,   -28, 6,   38,   -10,  25,  -83,
-                                           92, -64, 48, 66,  -4,   57,  40,  108, 39,   -109, -56, -91,
-                                           63, 118, 76, -21, -126, 104, -94, -70, -116, 60,   -82};
+const int8_t svdf_1_weights_feature[35] = {-81,  21, 73,   -122, -91, 43,  -93, 60,  126, -13, 99,   -74,
+                                           -44,  95, -90,  -58,  -50, -29, 90,  65,  25,  -41, -123, 24,
+                                           -100, 23, -113, -88,  48,  -20, 116, -67, 52,  -19, 46};

+ 3 - 2
Tests/UnitTest/TestCases/TestData/svdf_1/weights_time_data.h

@@ -1,5 +1,6 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int16_t svdf_1_weights_time[10] = {-49, -88, 69, 14, 18, 96, 80, -19, -117, -38};
+const int16_t svdf_1_weights_time[10] = {-127, -110, -109, 125, 97, 59, -84, -106, -39, -11};

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_2/biases_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_2/config_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #define SVDF_2_MULTIPLIER_IN 1717987072
 #define SVDF_2_MULTIPLIER_OUT 1099511552

+ 5 - 4
Tests/UnitTest/TestCases/TestData/svdf_2/input_sequence_data.h

@@ -1,7 +1,8 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_2_input_sequence[42] = {
-    29,  81, -38, 17,   -116, 43,   119, -127, 74,   115, 9,   118, 7,   -56,  -53, -14, -98, 60, -128, 10, 28,
-    -18, 12, -28, -126, 87,   -115, -44, -123, -109, -59, -87, -69, 121, -128, -95, -70, 2,   81, -119, 84, -122};
+const int8_t svdf_2_input_sequence[42] = {-83, 20,  39,  85,  7,   70,   -62, 109, 26,  -115, -37, 104, -113, -89,
+                                          -85, -30, 108, 9,   -51, -124, 109, 23,  17,  -58,  58,  89,  -69,  4,
+                                          -16, -2,  -64, 122, 79,  -5,   126, -71, -48, 15,   -28, 111, 81,   75};

+ 3 - 2
Tests/UnitTest/TestCases/TestData/svdf_2/output_ref_data.h

@@ -1,5 +1,6 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_2_output_ref[15] = {-53, 45, 27, -24, -53, 26, -82, -38, 11, -85, 94, -16, -32, 31, 4};
+const int8_t svdf_2_output_ref[15] = {60, 6, 78, -31, -20, -60, 103, -6, -33, 105, -60, 30, 4, -50, -101};

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_2/state_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_2/test_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #include "biases_data.h"
 #include "config_data.h"
 #include "input_sequence_data.h"

+ 7 - 6
Tests/UnitTest/TestCases/TestData/svdf_2/weights_feature_data.h

@@ -1,9 +1,10 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_2_weights_feature[70] = {27,   82,  -108, -127, 85,  3,   -51, 32,  110, -6,  -14, -16,  31,  101,
-                                           -122, 19,  76,   74,   -80, 12,  -22, -17, 10,  -28, 55,  109,  2,   -107,
-                                           -4,   72,  -65,  -59,  36,  -69, 105, -97, 25,  38,  110, -121, -88, -126,
-                                           -14,  16,  -88,  -66,  3,   -93, 69,  -64, 44,  103, 95,  -95,  68,  -46,
-                                           106,  -31, -63,  23,   -38, 36,  -95, -43, 93,  77,  91,  -26,  33,  59};
+const int8_t svdf_2_weights_feature[70] = {89,  -63,  93,  -59, -108, -25, 74,  -63,  104,  24,   126, -56, 100, 64,
+                                           88,  61,   -6,  76,  111,  39,  73,  -79,  37,   -2,   10,  -99, 31,  -125,
+                                           53,  -94,  114, -86, 119,  -56, 109, -124, 116,  -113, 38,  -63, 28,  -32,
+                                           -26, -61,  -85, 66,  109,  -51, 39,  -112, -123, -22,  62,  62,  102, -62,
+                                           65,  -107, -75, 107, -86,  53,  -50, 77,   62,   -29,  -51, 109, 43,  -95};

+ 4 - 3
Tests/UnitTest/TestCases/TestData/svdf_2/weights_time_data.h

@@ -1,6 +1,7 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int16_t svdf_2_weights_time[20] = {-31, -88, -10, -72, -119, -6, -70, 63,  -10, 93,
-                                         5,   42,  -6,  22,  6,    51, 37,  -38, 5,   117};
+const int16_t svdf_2_weights_time[20] = {-99, 75, 86, 3,   78,  18,  -70, -103, -49, 110,
+                                         124, 7,  29, -52, -10, -67, 1,   18,   111, -86};

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_3/biases_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_3/config_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #define SVDF_3_MULTIPLIER_IN 1717987072
 #define SVDF_3_MULTIPLIER_OUT 1099511552

+ 5 - 4
Tests/UnitTest/TestCases/TestData/svdf_3/input_sequence_data.h

@@ -1,7 +1,8 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_3_input_sequence[40] = {-64, -83, 38, 1,   119, -124, -53,  -63, 52,   -23, -22, -122, -79, -22,
-                                          80,  87,  14, -3,  107, -67,  57,   -2,  -104, 97,  62,  -26,  101, -126,
-                                          -53, 10,  23, -32, 28,  -68,  -108, 100, -8,   117, -72, -26};
+const int8_t svdf_3_input_sequence[40] = {-78, 32, 72,  -36, -40,  21,  122, -22,  -76, 114,  -122, -115, -102, 55,
+                                          70,  87, 120, -51, 124,  -39, -8,  -116, -57, -100, -43,  -18,  13,   30,
+                                          -43, 76, 80,  29,  -111, -60, 71,  103,  75,  -84,  -79,  -35};

+ 3 - 2
Tests/UnitTest/TestCases/TestData/svdf_3/output_ref_data.h

@@ -1,5 +1,6 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int8_t svdf_3_output_ref[12] = {20, 74, 56, 65, -23, 57, -49, -128, 2, 109, 72, -13};
+const int8_t svdf_3_output_ref[12] = {43, -46, 71, 45, -89, 71, -105, 59, 111, 127, 43, 127};

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_3/state_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_3/test_data.h

@@ -1,4 +1,5 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #include "biases_data.h"
 #include "config_data.h"
 #include "input_sequence_data.h"

+ 14 - 13
Tests/UnitTest/TestCases/TestData/svdf_3/weights_feature_data.h

@@ -1,17 +1,18 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
 const int8_t svdf_3_weights_feature[240] = {
-    -71,  -110, 122,  -115, -52,  -35,  -58, -81,  12,   -127, 88,   -67,  -114, 7,    -69,  73,  113,  -80,  -55,  -72,
-    -113, 45,   -13,  116,  -121, 38,   31,  -10,  56,   47,   -43,  11,   96,   106,  44,   -80, 48,   -37,  46,   -87,
-    -88,  -66,  63,   86,   56,   76,   37,  47,   -16,  77,   -68,  16,   -48,  42,   -107, 104, -103, 8,    -72,  -24,
-    -68,  -58,  -127, -75,  -121, 116,  69,  -42,  -32,  -38,  71,   57,   -116, -3,   -42,  -63, 10,   -3,   71,   56,
-    -24,  46,   -125, 102,  35,   -111, -79, 110,  -37,  -10,  -127, -103, -34,  44,   4,    -15, -12,  -35,  -121, -66,
-    -38,  -28,  -17,  -33,  -102, -127, -4,  50,   82,   93,   26,   88,   9,    59,   -119, 33,  26,   -117, -24,  102,
-    11,   72,   39,   -123, 20,   -29,  -53, 23,   12,   32,   64,   -3,   -81,  -54,  66,   67,  40,   -106, 126,  -3,
-    69,   67,   62,   102,  38,   104,  -78, -107, -106, 100,  64,   57,   -65,  26,   -94,  63,  -62,  28,   -80,  101,
-    -48,  -66,  -1,   -63,  1,    -85,  98,  66,   46,   -70,  50,   99,   -59,  -22,  -74,  -43, -75,  -16,  30,   90,
-    93,   68,   -53,  11,   46,   -92,  -12, -118, -45,  -62,  48,   -76,  35,   82,   73,   82,  -100, -109, 48,   62,
-    103,  110,  -5,   -18,  110,  0,    121, -70,  54,   -75,  116,  126,  16,   -32,  -126, -34, 45,   -100, 112,  -63,
-    8,    -100, 55,   -2,   -33,  -127, 97,  82,   -89,  8,    110,  -4,   -105, -110, -42,  -24, -35,  -34,  -34,  8};
+    -63,  -104, 46,   7,    97,   120, -35, 109,  78,   43,   -26,  42,   -57, -89,  -39, 111,  -31,  -16,  85,   102,
+    -16,  44,   -3,   -110, 5,    59,  -78, -89,  -29,  -94,  -19,  -23,  -95, -54,  -58, 120,  -89,  -101, 89,   -51,
+    9,    93,   -72,  98,   21,   107, 92,  116,  28,   16,   -75,  -59,  -99, 63,   -39, 107,  -76,  -110, -75,  -97,
+    -19,  44,   -70,  77,   116,  -66, 110, -35,  -86,  8,    57,   2,    9,   47,   38,  98,   22,   37,   32,   2,
+    80,   -49,  43,   58,   43,   41,  38,  -98,  -120, -120, 85,   -39,  -12, 68,   26,  -36,  -97,  93,   -34,  21,
+    -107, -3,   -114, -56,  -28,  37,  -98, 92,   105,  -120, -105, -12,  -29, -100, -56, -73,  85,   -15,  -47,  54,
+    38,   -90,  -38,  -91,  -96,  11,  126, -121, -36,  -20,  86,   -37,  114, -7,   2,   27,   27,   -128, 94,   -47,
+    75,   -34,  93,   -125, 82,   -54, 51,  92,   -91,  53,   112,  -19,  9,   -56,  0,   68,   -45,  -22,  56,   -54,
+    95,   62,   37,   89,   18,   -44, -80, 86,   11,   -82,  122,  -113, 61,  -45,  -72, -125, -111, 118,  124,  -35,
+    70,   -15,  -100, -116, -79,  -8,  -5,  -8,   -27,  37,   -8,   84,   -36, 61,   114, 123,  30,   -30,  -104, 23,
+    104,  56,   -29,  -56,  -50,  99,  22,  -127, -56,  126,  41,   103,  49,  -118, 95,  87,   -11,  0,    92,   17,
+    -1,   -37,  17,   2,    -108, -28, 14,  97,   -17,  87,   -29,  3,    0,   15,   11,  100,  -52,  -85,  125,  -122};

+ 4 - 3
Tests/UnitTest/TestCases/TestData/svdf_3/weights_time_data.h

@@ -1,6 +1,7 @@
-// Generated by generate_test_data.py using TFL version 2.6.0 as reference.
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 
-const int16_t svdf_3_weights_time[24] = {-17,  -93, -78, -65,  46,   109, -70, -68, -41, -47, 2,   -108,
-                                         -108, -42, 69,  -103, -127, 73,  100, 113, -52, 98,  -74, -14};
+const int16_t svdf_3_weights_time[24] = {39,  109, -70, -57, 109, -78, 91, -68, 24, 118, -104, 105,
+                                         -75, -87, 17,  65,  -84, 7,   -9, 88,  -4, 75,  73,   91};

+ 1 - 1
Tests/UnitTest/TestCases/TestData/svdf_int8/biases_data.h

@@ -1,5 +1,5 @@
 // Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
-// Interpreter from tensorflow version 2.10.0 and revision v2.10.0-rc3-6-g359c3cdfc5f.
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 3 - 3
Tests/UnitTest/TestCases/TestData/svdf_int8/config_data.h

@@ -1,12 +1,12 @@
 // Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
-// Interpreter from tensorflow version 2.10.0 and revision v2.10.0-rc3-6-g359c3cdfc5f.
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #define SVDF_INT8_MULTIPLIER_IN 1717987072
 #define SVDF_INT8_MULTIPLIER_OUT 1099511552
 #define SVDF_INT8_SHIFT_1 -3
 #define SVDF_INT8_SHIFT_2 -11
-#define SVDF_INT8_IN_ACTIVATION_MIN -32768
-#define SVDF_INT8_IN_ACTIVATION_MAX 32767
+#define SVDF_INT8_IN_ACTIVATION_MIN -128
+#define SVDF_INT8_IN_ACTIVATION_MAX 127
 #define SVDF_INT8_RANK 1
 #define SVDF_INT8_FEATURE_BATCHES 12
 #define SVDF_INT8_TIME_BATCHES 2
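As a side note on reading the multiplier/shift pairs above: CMSIS-NN and TFLite encode a real-valued scale as a Q31 fixed-point multiplier plus a power-of-two shift, so that real_scale ≈ multiplier × 2^(shift − 31). A small Python sketch decoding the two pairs from this config (the helper name is invented for illustration):

```python
def dequantize_multiplier(multiplier: int, shift: int) -> float:
    # Recover the approximate real scale encoded by a Q31 fixed-point
    # multiplier and a power-of-two shift: scale = multiplier * 2^(shift - 31).
    return multiplier * 2.0 ** (shift - 31)

# Multiplier/shift pairs from config_data.h above.
scale_in = dequantize_multiplier(1717987072, -3)    # ~0.1
scale_out = dequantize_multiplier(1099511552, -11)  # ~0.00025
```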

+ 1 - 1
Tests/UnitTest/TestCases/TestData/svdf_int8/input_sequence_data.h

@@ -1,5 +1,5 @@
 // Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
-// Interpreter from tensorflow version 2.10.0 and revision v2.10.0-rc3-6-g359c3cdfc5f.
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 6 - 0
Tests/UnitTest/TestCases/TestData/svdf_int8/output_ref_data.h

@@ -0,0 +1,6 @@
+// Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
+#pragma once
+#include <stdint.h>
+
+const int8_t svdf_int8_output_ref[12] = {2, -3, -1, -2, 0, 3, 1, -7, 6, 1, -2, 3};

+ 1 - 1
Tests/UnitTest/TestCases/TestData/svdf_int8/state_data.h

@@ -1,5 +1,5 @@
 // Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
-// Interpreter from tensorflow version 2.10.0 and revision v2.10.0-rc3-6-g359c3cdfc5f.
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 2 - 1
Tests/UnitTest/TestCases/TestData/svdf_int8/test_data.h

@@ -1,8 +1,9 @@
 // Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
-// Interpreter from tensorflow version 2.10.0 and revision v2.10.0-rc3-6-g359c3cdfc5f.
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #include "biases_data.h"
 #include "config_data.h"
 #include "input_sequence_data.h"
+#include "output_ref_data.h"
 #include "state_data.h"
 #include "weights_feature_data.h"
 #include "weights_time_data.h"

+ 1 - 1
Tests/UnitTest/TestCases/TestData/svdf_int8/weights_feature_data.h

@@ -1,5 +1,5 @@
 // Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
-// Interpreter from tensorflow version 2.10.0 and revision v2.10.0-rc3-6-g359c3cdfc5f.
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 1 - 1
Tests/UnitTest/TestCases/TestData/svdf_int8/weights_time_data.h

@@ -1,5 +1,5 @@
 // Generated by generate_test_data.py using tensorflow version 2.10.0 (Keras version 2.10.0).
-// Interpreter from tensorflow version 2.10.0 and revision v2.10.0-rc3-6-g359c3cdfc5f.
+// Interpreter from tflite_micro version 0.dev20230817002213-g3bd11ea3 and revision None.
 #pragma once
 #include <stdint.h>
 

+ 4 - 0
Tests/UnitTest/TestCases/test_arm_svdf_s8/test_arm_svdf_s8.c

@@ -26,6 +26,8 @@
 
 void svdf_int8_arm_svdf_s8(void)
 {
+    const int32_t output_ref_size = SVDF_INT8_DST_SIZE;
+    const int8_t *output_ref = svdf_int8_output_ref;
     const arm_cmsis_nn_status expected = ARM_CMSIS_NN_SUCCESS;
     cmsis_nn_context input_ctx;
     cmsis_nn_context output_ctx;
@@ -103,6 +105,8 @@ void svdf_int8_arm_svdf_s8(void)
                                                      output_data);
             TEST_ASSERT_EQUAL(expected, result);
         }
+
+        TEST_ASSERT_TRUE(validate(output_data, output_ref, output_ref_size));
     }
 
     // Make sure state data is not written outside boundary

+ 160 - 0
Tests/UnitTest/add_mul_settings.py

@@ -0,0 +1,160 @@
+# SPDX-FileCopyrightText: Copyright 2010-2023 Arm Limited and/or its affiliates <open-source-office@arm.com>
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the License); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an AS IS BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+from test_settings import TestSettings
+
+import tensorflow as tf
+import numpy as np
+
+
+class AddMulSettings(TestSettings):
+
+    def __init__(self,
+                 dataset,
+                 testtype,
+                 regenerate_weights,
+                 regenerate_input,
+                 regenerate_biases,
+                 schema_file,
+                 channels=1,
+                 x_in=4,
+                 y_in=4,
+                 decimal_input=6,
+                 randmin=TestSettings.INT8_MIN,
+                 randmax=TestSettings.INT8_MAX,
+                 out_activation_min=TestSettings.INT8_MIN,
+                 out_activation_max=TestSettings.INT8_MAX,
+                 int16xint8=False,
+                 interpreter="tensorflow"):
+        super().__init__(dataset,
+                         testtype,
+                         regenerate_weights,
+                         regenerate_input,
+                         regenerate_biases,
+                         schema_file,
+                         in_ch=channels,
+                         out_ch=channels,
+                         x_in=x_in,
+                         y_in=y_in,
+                         w_x=1,
+                         w_y=1,
+                         stride_x=1,
+                         stride_y=1,
+                         pad=False,
+                         randmin=randmin,
+                         randmax=randmax,
+                         batches=1,
+                         generate_bias=False,
+                         relu6=False,
+                         out_activation_min=out_activation_min,
+                         out_activation_max=out_activation_max,
+                         int16xint8=int16xint8,
+                         interpreter=interpreter)
+
+        self.x_input = self.x_output = x_in
+        self.y_input = self.y_output = y_in
+        self.decimal_input = decimal_input
+
+        self.left_shift = 15 if self.is_int16xint8 else 20
+
+    def generate_data(self, input_data1=None, input_data2=None) -> None:
+        input_shape = (1, self.y_input, self.x_input, self.input_ch)
+
+        input_data1 = self.get_randomized_data(list(input_shape),
+                                               self.inputs_table_file,
+                                               regenerate=self.regenerate_new_input,
+                                               decimals=self.decimal_input)
+        input_data2 = self.get_randomized_data(list(input_shape),
+                                               self.kernel_table_file,
+                                               regenerate=self.regenerate_new_weights,
+                                               decimals=self.decimal_input)
+
+        if self.is_int16xint8:
+            inttype = "int16_t"
+            inttype_tf = tf.int16
+        else:
+            inttype = "int8_t"
+            inttype_tf = tf.int8
+
+        # Create a one-layer functional Keras model, as add/mul cannot use a sequential Keras model.
+        input1 = tf.keras.layers.Input(shape=input_shape[1:])
+        input2 = tf.keras.layers.Input(shape=input_shape[1:])
+        if self.test_type == 'add':
+            layer = tf.keras.layers.Add()([input1, input2])
+        elif self.test_type == 'mul':
+            layer = tf.keras.layers.Multiply()([input1, input2])
+        else:
+            raise RuntimeError("Wrong test type")
+        out = tf.keras.layers.Lambda(function=lambda x: x)(layer)
+        model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)
+
+        interpreter = self.convert_and_interpret(model, inttype_tf)
+
+        input_details = interpreter.get_input_details()
+        interpreter.set_tensor(input_details[0]["index"], tf.cast(input_data1, inttype_tf))
+        interpreter.set_tensor(input_details[1]["index"], tf.cast(input_data2, inttype_tf))
+
+        # Calculate multipliers, shifts and offsets.
+        (input1_scale, self.input1_zero_point) = input_details[0]['quantization']
+        (input2_scale, self.input2_zero_point) = input_details[1]['quantization']
+        self.input1_zero_point = -self.input1_zero_point
+        self.input2_zero_point = -self.input2_zero_point
+        double_max_input_scale = max(input1_scale, input2_scale) * 2
+        (self.input1_mult, self.input1_shift) = self.quantize_scale(input1_scale / double_max_input_scale)
+        (self.input2_mult, self.input2_shift) = self.quantize_scale(input2_scale / double_max_input_scale)
+
+        if self.test_type == 'add':
+            actual_output_scale = double_max_input_scale / ((1 << self.left_shift) * self.output_scale)
+        elif self.test_type == 'mul':
+            actual_output_scale = input1_scale * input2_scale / self.output_scale
+        (self.output_mult, self.output_shift) = self.quantize_scale(actual_output_scale)
+
+        # Generate reference.
+        interpreter.invoke()
+        output_details = interpreter.get_output_details()
+        output_data = interpreter.get_tensor(output_details[0]["index"])
+        self.generate_c_array("input1", input_data1, datatype=inttype)
+        self.generate_c_array("input2", input_data2, datatype=inttype)
+        self.generate_c_array(self.output_data_file_prefix,
+                              np.clip(output_data, self.out_activation_min, self.out_activation_max),
+                              datatype=inttype)
+
+        self.write_c_config_header()
+        self.write_c_header_wrapper()
+
+    def write_c_config_header(self) -> None:
+        super().write_c_config_header(write_common_parameters=False)
+
+        filename = self.config_data
+        filepath = self.headers_dir + filename
+        prefix = self.testdataset.upper()
+
+        with open(filepath, "a") as f:
+            f.write("#define {}_DST_SIZE {}\n".format(prefix,
+                                                      self.batches * self.y_input * self.x_input * self.input_ch))
+            f.write("#define {}_OUT_ACTIVATION_MIN {}\n".format(prefix, self.out_activation_min))
+            f.write("#define {}_OUT_ACTIVATION_MAX {}\n".format(prefix, self.out_activation_max))
+            f.write("#define {}_INPUT1_OFFSET {}\n".format(prefix, self.input1_zero_point))
+            f.write("#define {}_INPUT2_OFFSET {}\n".format(prefix, self.input2_zero_point))
+            f.write("#define {}_OUTPUT_MULT {}\n".format(prefix, self.output_mult))
+            f.write("#define {}_OUTPUT_SHIFT {}\n".format(prefix, self.output_shift))
+            f.write("#define {}_OUTPUT_OFFSET {}\n".format(prefix, self.output_zero_point))
+            if self.test_type == 'add':
+                f.write("#define {}_LEFT_SHIFT {}\n".format(prefix, self.left_shift))
+                f.write("#define {}_INPUT1_SHIFT {}\n".format(prefix, self.input1_shift))
+                f.write("#define {}_INPUT2_SHIFT {}\n".format(prefix, self.input2_shift))
+                f.write("#define {}_INPUT1_MULT {}\n".format(prefix, self.input1_mult))
+                f.write("#define {}_INPUT2_MULT {}\n".format(prefix, self.input2_mult))

+ 204 - 0
Tests/UnitTest/conv_settings.py

@@ -0,0 +1,204 @@
+# SPDX-FileCopyrightText: Copyright 2010-2023 Arm Limited and/or its affiliates <open-source-office@arm.com>
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the License); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an AS IS BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+from test_settings import TestSettings
+
+import tensorflow as tf
+import numpy as np
+
+
+class ConvSettings(TestSettings):
+
+    def __init__(self,
+                 dataset,
+                 testtype,
+                 regenerate_weights,
+                 regenerate_input,
+                 regenerate_biases,
+                 schema_file,
+                 in_ch=1,
+                 out_ch=1,
+                 x_in=7,
+                 y_in=7,
+                 w_x=3,
+                 w_y=3,
+                 stride_x=2,
+                 stride_y=2,
+                 pad=True,
+                 randmin=TestSettings.INT8_MIN,
+                 randmax=TestSettings.INT8_MAX,
+                 batches=1,
+                 generate_bias=True,
+                 relu6=False,
+                 out_activation_min=None,
+                 out_activation_max=None,
+                 int16xint8=False,
+                 bias_min=TestSettings.INT32_MIN,
+                 bias_max=TestSettings.INT32_MAX,
+                 dilation_x=1,
+                 dilation_y=1,
+                 interpreter="tensorflow"):
+        super().__init__(dataset,
+                         testtype,
+                         regenerate_weights,
+                         regenerate_input,
+                         regenerate_biases,
+                         schema_file,
+                         in_ch,
+                         out_ch,
+                         x_in,
+                         y_in,
+                         w_x,
+                         w_y,
+                         stride_x,
+                         stride_y,
+                         pad,
+                         randmin,
+                         randmax,
+                         batches,
+                         generate_bias=generate_bias,
+                         relu6=relu6,
+                         out_activation_min=out_activation_min,
+                         out_activation_max=out_activation_max,
+                         int16xint8=int16xint8,
+                         bias_min=bias_min,
+                         bias_max=bias_max,
+                         dilation_x=dilation_x,
+                         dilation_y=dilation_y,
+                         interpreter=interpreter)
+
+        self.scaling_factors = []
+
+        if self.test_type == 'depthwise_conv':
+            self.channel_multiplier = self.output_ch // self.input_ch
+            if self.output_ch % self.input_ch != 0:
+                raise RuntimeError("out channel ({}) is not a multiple of in channel ({})".format(out_ch, in_ch))
+
+    def write_c_config_header(self) -> None:
+        super().write_c_config_header()
+
+        filename = self.config_data
+        filepath = self.headers_dir + filename
+        prefix = self.testdataset.upper()
+
+        with open(filepath, "a") as f:
+            self.write_common_config(f, prefix)
+            if self.test_type == 'depthwise_conv':
+                f.write("#define {}_CH_MULT {}\n".format(prefix, self.channel_multiplier))
+            f.write("#define {}_INPUT_OFFSET {}\n".format(prefix, -self.input_zero_point))
+            f.write("#define {}_OUTPUT_OFFSET {}\n".format(prefix, self.output_zero_point))
+            f.write("#define {}_DILATION_X {}\n".format(prefix, self.dilation_x))
+            f.write("#define {}_DILATION_Y {}\n".format(prefix, self.dilation_y))
+
+    def generate_quantize_per_channel_multiplier(self):
+        num_channels = self.output_ch
+        per_channel_multiplier = []
+        per_channel_shift = []
+
+        if len(self.scaling_factors) != num_channels:
+            raise RuntimeError("Missing scaling factors")
+
+        for i in range(num_channels):
+            effective_output_scale = self.input_scale * self.scaling_factors[i] / self.output_scale
+            (quantized_multiplier, shift) = self.quantize_scale(effective_output_scale)
+
+            per_channel_multiplier.append(quantized_multiplier)
+            per_channel_shift.append(shift)
+
+        return per_channel_multiplier, per_channel_shift
+
+    def generate_data(self, input_data=None, weights=None, biases=None) -> None:
+        if self.is_int16xint8:
+            inttype = tf.int16
+            datatype = "int16_t"
+            bias_datatype = "int64_t"
+        else:
+            inttype = tf.int8
+            datatype = "int8_t"
+            bias_datatype = "int32_t"
+
+        input_data = self.get_randomized_input_data(input_data)
+
+        if self.test_type == 'conv':
+            out_channel = self.output_ch
+        elif self.test_type == 'depthwise_conv':
+            out_channel = self.channel_multiplier
+
+        if weights is not None:
+            weights = tf.reshape(weights, [self.filter_y, self.filter_x, self.input_ch, out_channel])
+        else:
+            weights = self.get_randomized_data([self.filter_y, self.filter_x, self.input_ch, out_channel],
+                                               self.kernel_table_file,
+                                               minrange=TestSettings.INT32_MIN,
+                                               maxrange=TestSettings.INT32_MAX,
+                                               decimals=1,
+                                               regenerate=self.regenerate_new_weights)
+
+        biases = self.get_randomized_bias_data(biases)
+
+        # Create a one layer Keras model.
+        model = tf.keras.models.Sequential()
+        input_shape = (self.batches, self.y_input, self.x_input, self.input_ch)
+        model.add(tf.keras.layers.InputLayer(input_shape=input_shape[1:], batch_size=self.batches))
+        if self.test_type == 'conv':
+            conv_layer = tf.keras.layers.Conv2D(self.output_ch,
+                                                kernel_size=(self.filter_y, self.filter_x),
+                                                strides=(self.stride_y, self.stride_x),
+                                                padding=self.padding,
+                                                input_shape=input_shape[1:],
+                                                dilation_rate=(self.dilation_y, self.dilation_x))
+            model.add(conv_layer)
+            conv_layer.set_weights([weights, biases])
+        elif self.test_type == 'depthwise_conv':
+            depthwise_layer = tf.keras.layers.DepthwiseConv2D(kernel_size=(self.filter_y, self.filter_x),
+                                                              strides=(self.stride_y, self.stride_x),
+                                                              padding=self.padding,
+                                                              depth_multiplier=self.channel_multiplier,
+                                                              input_shape=input_shape[1:],
+                                                              dilation_rate=(self.dilation_y, self.dilation_x))
+            model.add(depthwise_layer)
+            depthwise_layer.set_weights([weights, biases])
+        interpreter = self.convert_and_interpret(model, inttype, input_data)
+
+        all_layers_details = interpreter.get_tensor_details()
+        filter_layer = all_layers_details[2]
+        bias_layer = all_layers_details[1]
+        if weights.numpy().size != interpreter.get_tensor(filter_layer['index']).size or \
+           (self.generate_bias and biases.numpy().size != interpreter.get_tensor(bias_layer['index']).size):
+            raise RuntimeError(f"Dimension mismatch for {self.testdataset}")
+
+        output_details = interpreter.get_output_details()
+        self.set_output_dims_and_padding(output_details[0]['shape'][2], output_details[0]['shape'][1])
+
+        self.generate_c_array(self.input_data_file_prefix, input_data, datatype=datatype)
+        self.generate_c_array(self.weight_data_file_prefix, interpreter.get_tensor(filter_layer['index']))
+
+        self.scaling_factors = filter_layer['quantization_parameters']['scales']
+        per_channel_multiplier, per_channel_shift = self.generate_quantize_per_channel_multiplier()
+        self.generate_c_array("output_mult", per_channel_multiplier, datatype='int32_t')
+        self.generate_c_array("output_shift", per_channel_shift, datatype='int32_t')
+
+        self.generate_c_array(self.bias_data_file_prefix, interpreter.get_tensor(bias_layer['index']), bias_datatype)
+
+        # Generate reference
+        interpreter.invoke()
+        output_data = interpreter.get_tensor(output_details[0]["index"])
+        self.generate_c_array(self.output_data_file_prefix,
+                              np.clip(output_data, self.out_activation_min, self.out_activation_max),
+                              datatype=datatype)
+
+        self.write_c_config_header()
+        self.write_c_header_wrapper()
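`generate_quantize_per_channel_multiplier` above boils down to one scale decomposition per output channel. A standalone sketch with made-up scales; `quantize_scale` is approximated here with `math.frexp` and may differ from the real `TestSettings` helper in rounding details:

```python
import math

def quantize_scale(scale):
    # frexp-style sketch of TFLite's multiplier/shift decomposition:
    # scale ~= multiplier * 2^(shift - 31) with multiplier in [2^30, 2^31).
    significand, shift = math.frexp(scale)
    multiplier = round(significand * (1 << 31))
    if multiplier == (1 << 31):
        multiplier //= 2
        shift += 1
    return multiplier, shift

# Hypothetical quantization parameters for a 3-channel convolution.
input_scale, output_scale = 0.5, 0.25
per_channel_scales = [0.5, 1.0, 0.125]  # one filter scale per output channel

per_channel_multiplier = []
per_channel_shift = []
for channel_scale in per_channel_scales:
    effective_output_scale = input_scale * channel_scale / output_scale
    multiplier, shift = quantize_scale(effective_output_scale)
    per_channel_multiplier.append(multiplier)
    per_channel_shift.append(shift)
```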

+ 173 - 0
Tests/UnitTest/fully_connected_settings.py

@@ -0,0 +1,173 @@
+# SPDX-FileCopyrightText: Copyright 2010-2023 Arm Limited and/or its affiliates <open-source-office@arm.com>
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the License); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an AS IS BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+from test_settings import TestSettings
+
+import tensorflow as tf
+import numpy as np
+
+
+class FullyConnectedSettings(TestSettings):
+
+    def __init__(self,
+                 dataset,
+                 testtype,
+                 regenerate_weights,
+                 regenerate_input,
+                 regenerate_biases,
+                 schema_file,
+                 in_ch=1,
+                 out_ch=1,
+                 x_in=1,
+                 y_in=1,
+                 w_x=1,
+                 w_y=1,
+                 stride_x=1,
+                 stride_y=1,
+                 pad=False,
+                 randmin=TestSettings.INT8_MIN,
+                 randmax=TestSettings.INT8_MAX,
+                 batches=1,
+                 generate_bias=True,
+                 out_activation_min=None,
+                 out_activation_max=None,
+                 int16xint8=False,
+                 bias_min=TestSettings.INT32_MIN,
+                 bias_max=TestSettings.INT32_MAX,
+                 interpreter="tensorflow"):
+        super().__init__(dataset,
+                         testtype,
+                         regenerate_weights,
+                         regenerate_input,
+                         regenerate_biases,
+                         schema_file,
+                         in_ch,
+                         out_ch,
+                         x_in,
+                         y_in,
+                         x_in,
+                         y_in,
+                         stride_x,
+                         stride_y,
+                         pad,
+                         randmin,
+                         randmax,
+                         batches,
+                         generate_bias=generate_bias,
+                         out_activation_min=out_activation_min,
+                         out_activation_max=out_activation_max,
+                         int16xint8=int16xint8,
+                         bias_min=bias_min,
+                         bias_max=bias_max,
+                         interpreter=interpreter)
+
+    def write_c_config_header(self) -> None:
+        super().write_c_config_header()
+
+        filename = self.config_data
+        filepath = self.headers_dir + filename
+        prefix = self.testdataset.upper()
+
+        with open(filepath, "a") as f:
+            f.write("#define {}_OUTPUT_MULTIPLIER {}\n".format(prefix, self.quantized_multiplier))
+            f.write("#define {}_OUTPUT_SHIFT {}\n".format(prefix, self.quantized_shift))
+            f.write("#define {}_ACCUMULATION_DEPTH {}\n".format(prefix, self.input_ch * self.x_input * self.y_input))
+            f.write("#define {}_INPUT_OFFSET {}\n".format(prefix, -self.input_zero_point))
+            f.write("#define {}_OUTPUT_OFFSET {}\n".format(prefix, self.output_zero_point))
+
+    def quantize_multiplier(self):
+        input_product_scale = self.input_scale * self.weights_scale
+        if input_product_scale < 0:
+            raise RuntimeError("negative input product scale")
+        real_multiplier = input_product_scale / self.output_scale
+        (self.quantized_multiplier, self.quantized_shift) = self.quantize_scale(real_multiplier)
+
+    def generate_data(self, input_data=None, weights=None, biases=None) -> None:
+        input_data = self.get_randomized_input_data(input_data,
+                                                    [self.batches, self.input_ch * self.x_input * self.y_input])
+
+        if self.is_int16xint8:
+            inttype = tf.int16
+            datatype = "int16_t"
+            bias_datatype = "int64_t"
+        else:
+            inttype = tf.int8
+            datatype = "int8_t"
+            bias_datatype = "int32_t"
+
+        fc_weights_format = [self.input_ch * self.y_input * self.x_input, self.output_ch]
+
+        if weights is not None:
+            weights = tf.reshape(weights, fc_weights_format)
+        else:
+            weights = self.get_randomized_data(fc_weights_format,
+                                               self.kernel_table_file,
+                                               minrange=TestSettings.INT32_MIN,
+                                               maxrange=TestSettings.INT32_MAX,
+                                               regenerate=self.regenerate_new_weights)
+
+        biases = self.get_randomized_bias_data(biases)
+
+        # Create model with one fully_connected layer.
+        model = tf.keras.models.Sequential()
+        model.add(
+            tf.keras.layers.InputLayer(input_shape=(self.y_input * self.x_input * self.input_ch, ),
+                                       batch_size=self.batches))
+        fully_connected_layer = tf.keras.layers.Dense(self.output_ch, activation=None)
+        model.add(fully_connected_layer)
+        fully_connected_layer.set_weights([weights, biases])
+
+        interpreter = self.convert_and_interpret(model, inttype, input_data)
+
+        all_layers_details = interpreter.get_tensor_details()
+        if self.generate_bias:
+            filter_layer = all_layers_details[2]
+            bias_layer = all_layers_details[1]
+        else:
+            filter_layer = all_layers_details[1]
+        if weights.numpy().size != interpreter.get_tensor(filter_layer['index']).size or \
+           (self.generate_bias and biases.numpy().size != interpreter.get_tensor(bias_layer['index']).size):
+            raise RuntimeError(f"Dimension mismatch for {self.testdataset}")
+
+        # The generic destination size calculation for these tests is: self.x_output * self.y_output * self.output_ch
+        # * self.batches.
+        self.x_output = 1
+        self.y_output = 1
+        output_details = interpreter.get_output_details()
+        if self.output_ch != output_details[0]['shape'][1] or self.batches != output_details[0]['shape'][0]:
+            raise RuntimeError("Fully connected out dimension mismatch")
+
+        self.weights_scale = filter_layer['quantization_parameters']['scales'][0]
+        self.quantize_multiplier()
+
+        self.generate_c_array(self.input_data_file_prefix, input_data, datatype=datatype)
+        self.generate_c_array(self.weight_data_file_prefix, interpreter.get_tensor(filter_layer['index']))
+
+        if self.generate_bias:
+            self.generate_c_array(self.bias_data_file_prefix, interpreter.get_tensor(bias_layer['index']),
+                                  bias_datatype)
+        else:
+            self.generate_c_array(self.bias_data_file_prefix, biases, bias_datatype)
+
+        # Generate reference
+        interpreter.invoke()
+        output_data = interpreter.get_tensor(output_details[0]["index"])
+        self.generate_c_array(self.output_data_file_prefix,
+                              np.clip(output_data, self.out_activation_min, self.out_activation_max),
+                              datatype=datatype)
+
+        self.write_c_config_header()
+        self.write_c_header_wrapper()
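`quantize_multiplier` above folds the input, weight, and output scales into a single requantization factor. The sketch below shows what that factor does to a raw int32 accumulator; all values are invented, and the plain `round` here stands in for the rounding-doubling fixed-point arithmetic the kernels actually use:

```python
# Hypothetical fully-connected quantization parameters (binary-exact).
input_scale, weights_scale, output_scale = 0.5, 0.25, 0.125

input_product_scale = input_scale * weights_scale
assert input_product_scale >= 0, "negative input product scale"
real_multiplier = input_product_scale / output_scale  # 1.0 for these scales

# Requantize a raw int32 accumulator down to the int8 output range.
acc = 57
out = round(acc * real_multiplier)
out = max(-128, min(127, out))  # clamp to int8
```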

File diff not shown because it is too large
+ 14 - 1850
Tests/UnitTest/generate_test_data.py


+ 409 - 0
Tests/UnitTest/lstm_settings.py

@@ -0,0 +1,409 @@
+# SPDX-FileCopyrightText: Copyright 2010-2023 Arm Limited and/or its affiliates <open-source-office@arm.com>
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the License); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an AS IS BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+import math
+from test_settings import TestSettings
+
+import tensorflow as tf
+import numpy as np
+
+
+class LSTMSettings(TestSettings):
+
+    def __init__(self,
+                 dataset,
+                 testtype,
+                 regenerate_weights,
+                 regenerate_input,
+                 regenerate_biases,
+                 schema_file,
+                 batches=2,
+                 time_steps=2,
+                 number_inputs=3,
+                 number_units=4,
+                 time_major=True,
+                 randmin=TestSettings.INT8_MIN,
+                 randmax=TestSettings.INT8_MAX,
+                 generate_bias=True,
+                 interpreter="tensorflow"):
+        super().__init__(dataset,
+                         testtype,
+                         regenerate_weights,
+                         regenerate_input,
+                         regenerate_biases,
+                         schema_file,
+                         1,
+                         1,
+                         1,
+                         1,
+                         1,
+                         1,
+                         1,
+                         1,
+                         False,
+                         randmin,
+                         randmax,
+                         generate_bias=generate_bias,
+                         interpreter=interpreter)
+
+        self.batches = batches
+        self.time_steps = time_steps
+        self.number_units = number_units
+        self.number_inputs = number_inputs
+
+        self.kernel_hidden_table_file = self.pregenerated_data_dir + self.testdataset + '/' + 'kernel_hidden.txt'
+
+        self.time_major = time_major
+
+        self.in_activation_max = TestSettings.INT16_MAX
+        self.in_activation_min = TestSettings.INT16_MIN
+
+        self.lstm_scales = []
+
+        # Layer indexes. Works with tensorflow 2.10 and 2.11.
+        self.output_gate_bias_index = 1
+        self.cell_gate_bias_index = 2
+        self.forget_gate_bias_index = 3
+        self.input_gate_bias_index = 4
+        self.recurrent_input_to_output_w_index = 5
+        self.recurrent_input_to_cell_w_index = 6
+        self.recurrent_input_to_forget_w_index = 7
+        self.recurrent_input_to_input_w_index = 8
+        self.input_to_output_w_index = 9
+        self.input_to_cell_w_index = 10
+        self.input_to_forget_w_index = 11
+        self.input_to_input_w_index = 12
+        self.output_state_index = 13
+        self.cell_state_index = 14
+        self.input_norm_coeff_index = 15
+        self.forget_norm_coeff_index = 16
+        self.cell_norm_coeff_index = 17
+        self.output_norm_coeff_index = 18
+        self.effective_hidden_scale_intermediate_index = 20
+
+    def generate_data(self, input_data=None, weights=None, hidden_weights=None, biases=None) -> None:
+
+        input_dims = [self.batches, self.time_steps, self.number_inputs]
+        if input_data is not None:
+            input_data = tf.reshape(input_data, input_dims)
+        else:
+            input_data = self.get_randomized_data(input_dims,
+                                                  self.inputs_table_file,
+                                                  regenerate=self.regenerate_new_input)
+
+        # The number of cells equals the number of units when there is no projection layer.
+        number_cells = self.number_units
+
+        # Each LSTM cell has 4 input weights, 4 hidden (recurrent or cell state) weights and 4 biases.
+        number_w_b = 4
+
+        if weights is not None:
+            weights = tf.reshape(weights, [self.number_inputs, number_cells * number_w_b])
+        else:
+            weights = self.get_randomized_data([self.number_inputs, number_cells * number_w_b],
+                                               self.kernel_table_file,
+                                               regenerate=self.regenerate_new_weights,
+                                               decimals=8,
+                                               minrange=-1.0,
+                                               maxrange=1.0)
+
+        if hidden_weights is not None:
+            hidden_weights = tf.reshape(hidden_weights, [number_cells, number_cells * number_w_b])
+        else:
+            hidden_weights = self.get_randomized_data([number_cells, number_cells * number_w_b],
+                                                      self.kernel_hidden_table_file,
+                                                      regenerate=self.regenerate_new_weights,
+                                                      decimals=8,
+                                                      minrange=-1.0,
+                                                      maxrange=1.0)
+        if not self.generate_bias:
+            biases = [0] * number_cells * number_w_b
+        if biases is not None:
+            biases = tf.reshape(biases, [number_cells * number_w_b])
+        else:
+            biases = self.get_randomized_data([number_cells * number_w_b],
+                                              self.bias_table_file,
+                                              regenerate=self.regenerate_new_bias,
+                                              decimals=8,
+                                              minrange=-1.0,
+                                              maxrange=1.0)
+
+        # Create a Keras based LSTM model.
+        input_layer = tf.keras.layers.Input(shape=(self.time_steps, self.number_inputs),
+                                            batch_size=self.batches,
+                                            name='input')
+        if self.time_major:
+            input_layer_transposed = tf.transpose(input_layer, perm=[1, 0, 2])
+            lstm_layer = tf.keras.layers.LSTM(units=self.number_units,
+                                              time_major=self.time_major,
+                                              return_sequences=True)(input_layer_transposed)
+        else:
+            lstm_layer = tf.keras.layers.LSTM(units=self.number_units,
+                                              time_major=self.time_major,
+                                              return_sequences=True)(input_layer)
+        model = tf.keras.Model(input_layer, lstm_layer, name="LSTM")
+
+        if self.time_major:
+            time_major_offset = 1
+            shape = (self.time_steps, self.batches, self.number_inputs)
+        else:
+            time_major_offset = 0
+            shape = (self.batches, self.time_steps, self.number_inputs)
+
+        # Writing weight and bias to model.
+        print("Updating weights", model.layers[1 + time_major_offset].weights[0].name)
+        model.layers[1 + time_major_offset].weights[0].assign(weights)
+        print("Updating hidden weights", model.layers[1 + time_major_offset].weights[1].name)
+        model.layers[1 + time_major_offset].weights[1].assign(hidden_weights)
+        print("Updating bias", model.layers[1 + time_major_offset].weights[2].name)
+        model.layers[1 + time_major_offset].weights[2].assign(biases)
+
+        interpreter = self.convert_and_interpret(model, tf.int8, input_data, dataset_shape=shape)
+
+        all_layers_details = interpreter.get_tensor_details()
+
+        for i in all_layers_details:
+            self.lstm_scales.append(i['quantization_parameters']['scales'])
+
+        input_data_for_index = all_layers_details[0]
+
+        input_gate_bias = all_layers_details[self.input_gate_bias_index + time_major_offset]
+        forget_gate_bias = all_layers_details[self.forget_gate_bias_index + time_major_offset]
+        cell_gate_bias = all_layers_details[self.cell_gate_bias_index + time_major_offset]
+        output_gate_bias = all_layers_details[self.output_gate_bias_index + time_major_offset]
+
+        input_to_input_w = all_layers_details[self.input_to_input_w_index + time_major_offset]
+        input_to_forget_w = all_layers_details[self.input_to_forget_w_index + time_major_offset]
+        input_to_cell_w = all_layers_details[self.input_to_cell_w_index + time_major_offset]
+        input_to_output_w = all_layers_details[self.input_to_output_w_index + time_major_offset]
+
+        recurrent_input_to_input_w = all_layers_details[self.recurrent_input_to_input_w_index + time_major_offset]
+        recurrent_input_to_forget_w = all_layers_details[self.recurrent_input_to_forget_w_index + time_major_offset]
+        recurrent_input_to_cell_w = all_layers_details[self.recurrent_input_to_cell_w_index + time_major_offset]
+        recurrent_input_to_output_w = all_layers_details[self.recurrent_input_to_output_w_index + time_major_offset]
+
+        if self.time_major:
+            time_major_offset = 2
+
+        output_state = all_layers_details[self.output_state_index + time_major_offset]
+        cell_state = all_layers_details[self.cell_state_index + time_major_offset]
+
+        input_norm_coeff = all_layers_details[self.input_norm_coeff_index + time_major_offset]
+        forget_norm_coeff = all_layers_details[self.forget_norm_coeff_index + time_major_offset]
+        cell_norm_coeff = all_layers_details[self.cell_norm_coeff_index + time_major_offset]
+        output_norm_coeff = all_layers_details[self.output_norm_coeff_index + time_major_offset]
+
+        # For scale and zero point.
+        effective_hidden_scale_intermediate = all_layers_details[
+            self.effective_hidden_scale_intermediate_index + time_major_offset]
+
+        input_details = interpreter.get_input_details()
+        output_details = interpreter.get_output_details()
+        actual_input_data = interpreter.get_tensor(input_details[0]["index"])
+        if (input_data.numpy().shape != actual_input_data.shape) or \
+           not ((input_data.numpy().astype(int) == actual_input_data).all().astype(int)):
+            raise RuntimeError("Input data mismatch")
+
+        self.generate_c_array(self.input_data_file_prefix, interpreter.get_tensor(input_data_for_index['index']))
+        self.generate_c_array("input_to_input_w", interpreter.get_tensor(input_to_input_w['index']))
+        self.generate_c_array("input_to_forget_w", interpreter.get_tensor(input_to_forget_w['index']))
+        self.generate_c_array("input_to_cell_w", interpreter.get_tensor(input_to_cell_w['index']))
+        self.generate_c_array("input_to_output_w", interpreter.get_tensor(input_to_output_w['index']))
+        self.generate_c_array("recurrent_input_to_input_w", interpreter.get_tensor(recurrent_input_to_input_w['index']))
+        self.generate_c_array("recurrent_input_to_forget_w",
+                              interpreter.get_tensor(recurrent_input_to_forget_w['index']))
+        self.generate_c_array("recurrent_input_to_cell_w", interpreter.get_tensor(recurrent_input_to_cell_w['index']))
+        self.generate_c_array("recurrent_input_to_output_w",
+                              interpreter.get_tensor(recurrent_input_to_output_w['index']))
+
+        # Peephole not supported so these are nullptrs.
+        self.generate_c_array("cell_to_input", [], datatype='int16_t')
+        self.generate_c_array("cell_to_forget", [], datatype='int16_t')
+        self.generate_c_array("cell_to_output", [], datatype='int16_t')
+
+        self.generate_c_array("input_gate_bias", interpreter.get_tensor(input_gate_bias['index']), datatype='int32_t')
+        self.generate_c_array("cell_gate_bias", interpreter.get_tensor(cell_gate_bias['index']), datatype='int32_t')
+        self.generate_c_array("forget_gate_bias", interpreter.get_tensor(forget_gate_bias['index']), datatype='int32_t')
+        self.generate_c_array("output_gate_bias", interpreter.get_tensor(output_gate_bias['index']), datatype='int32_t')
+
+        # Projection not supported so these are nullptrs.
+        self.generate_c_array("projection_weights", [])
+        self.generate_c_array("projection_bias", [], datatype='int32_t')
+
+        self.generate_c_array("output_state", interpreter.get_tensor(output_state['index']), const="")
+        self.generate_c_array("cell_state", interpreter.get_tensor(cell_state['index']), datatype='int16_t', const="")
+
+        self.generate_c_array("input_norm_coeff", interpreter.get_tensor(input_norm_coeff['index']))
+        self.generate_c_array("forget_norm_coeff", interpreter.get_tensor(forget_norm_coeff['index']))
+        self.generate_c_array("cell_norm_coeff", interpreter.get_tensor(cell_norm_coeff['index']))
+        self.generate_c_array("output_norm_coeff", interpreter.get_tensor(output_norm_coeff['index']))
+
+        input_scale = input_data_for_index['quantization_parameters']['scales'][0]
+        cell_scale = cell_state['quantization_parameters']['scales'][0]
+        output_state_scale = output_state['quantization_parameters']['scales'][0]
+        input_zp = input_data_for_index['quantization_parameters']['zero_points'][0]
+        output_zp = output_details[0]['quantization_parameters']['zero_points'][0]
+        output_state_zp = output_state['quantization_parameters']['zero_points'][0]
+        self.hidden_zp = effective_hidden_scale_intermediate['quantization_parameters']['zero_points'][0]
+        self.output_state_offset = output_state_zp
+
+        tmp = math.log(cell_scale) * (1 / math.log(2))
+        self.cell_state_shift = int(round(tmp))
+
+        self.calc_scales(input_scale, output_state_scale)
+
+        # Calculate effective biases.
+        input_zp = -input_zp
+        output_zp = -output_zp
+        output_state_zp = -output_state_zp
+        input_to_forget_eff_bias = self.calc_effective_bias(interpreter, input_zp, input_to_forget_w, forget_gate_bias)
+        recurrent_to_forget_eff_bias = self.calc_effective_bias(interpreter, output_state_zp,
+                                                                recurrent_input_to_forget_w, None, False)
+        input_to_cell_eff_bias = self.calc_effective_bias(interpreter, input_zp, input_to_cell_w, cell_gate_bias)
+        recurrent_to_cell_eff_bias = self.calc_effective_bias(interpreter, output_state_zp, recurrent_input_to_cell_w,
+                                                              None, False)
+        input_to_output_eff_bias = self.calc_effective_bias(interpreter, input_zp, input_to_output_w, output_gate_bias)
+        recurrent_to_output_eff_bias = self.calc_effective_bias(interpreter, output_state_zp,
+                                                                recurrent_input_to_output_w, None, False)
+        input_to_input_eff_bias = self.calc_effective_bias(interpreter, input_zp, input_to_input_w, input_gate_bias)
+
+        recurrent_to_input_eff_bias = self.calc_effective_bias(interpreter, output_state_zp, recurrent_input_to_input_w,
+                                                               None, False)
+
+        self.generate_c_array("input_to_input_eff_bias", input_to_input_eff_bias, datatype='int32_t')
+        self.generate_c_array("input_to_forget_eff_bias", input_to_forget_eff_bias, datatype='int32_t')
+        self.generate_c_array("input_to_cell_eff_bias", input_to_cell_eff_bias, datatype='int32_t')
+        self.generate_c_array("input_to_output_eff_bias", input_to_output_eff_bias, datatype='int32_t')
+        self.generate_c_array("recurrent_to_input_eff_bias", recurrent_to_input_eff_bias, datatype='int32_t')
+        self.generate_c_array("recurrent_to_cell_eff_bias", recurrent_to_cell_eff_bias, datatype='int32_t')
+        self.generate_c_array("recurrent_to_forget_eff_bias", recurrent_to_forget_eff_bias, datatype='int32_t')
+        self.generate_c_array("recurrent_to_output_eff_bias", recurrent_to_output_eff_bias, datatype='int32_t')
+
+        # Generate reference
+        interpreter.invoke()
+        output_data = interpreter.get_tensor(output_details[0]["index"])
+        self.generate_c_array(self.output_data_file_prefix, output_data, datatype='int8_t')
+
+        self.write_c_config_header()
+        self.write_c_header_wrapper()
+
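The `cell_state_shift` computed near the end of `generate_data` is just the rounded base-2 logarithm of the cell-state scale (TFLite's integer LSTM quantizes the cell state to a power-of-two scale, so the rescale can be a pure shift). A minimal standalone sketch of that arithmetic:

```python
import math

def cell_state_shift(cell_scale):
    """Rounded log2 of the cell-state scale -- equivalent to
    round(log(cell_scale) / log(2)) as written in generate_data()."""
    return int(round(math.log2(cell_scale)))

print(cell_state_shift(2 ** -11))  # -> -11
```

The scale values here are illustrative; the real ones come out of the quantized interpreter's tensor details.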
+    def calc_scales(self, input_scale, output_state_scale):
+        intermediate_scale = pow(2, -12)
+
+        if self.time_major:
+            time_major_offset = 1
+        else:
+            time_major_offset = 0
+
+        self.effective_hidden_scale = pow(2, -15) / output_state_scale * pow(2, -15)
+
+        self.i2i_effective_scale = input_scale * self.lstm_scales[self.input_to_input_w_index + time_major_offset][0] \
+            / intermediate_scale
+        self.i2f_effective_scale = input_scale * self.lstm_scales[self.input_to_forget_w_index + time_major_offset][0] \
+            / intermediate_scale
+        self.i2c_effective_scale = input_scale * self.lstm_scales[self.input_to_cell_w_index + time_major_offset][0] \
+            / intermediate_scale
+        self.i2o_effective_scale = input_scale * self.lstm_scales[self.input_to_output_w_index + time_major_offset][0] \
+            / intermediate_scale
+
+        self.r2i_effective_scale = output_state_scale * self.lstm_scales[self.recurrent_input_to_input_w_index +
+                                                                         time_major_offset][0] / intermediate_scale
+        self.r2f_effective_scale = output_state_scale * self.lstm_scales[self.recurrent_input_to_forget_w_index +
+                                                                         time_major_offset][0] / intermediate_scale
+        self.r2c_effective_scale = output_state_scale * self.lstm_scales[self.recurrent_input_to_cell_w_index +
+                                                                         time_major_offset][0] / intermediate_scale
+        self.r2o_effective_scale = output_state_scale * self.lstm_scales[self.recurrent_input_to_output_w_index +
+                                                                         time_major_offset][0] / intermediate_scale
+
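The folding done in `calc_scales` follows the usual integer-LSTM recipe: each matmul's float scale product (activation scale times weight scale) is expressed relative to the fixed 2^-12 intermediate scale, giving one rescale factor per gate. A standalone sketch with hypothetical scale values (not taken from a generated model):

```python
# Sketch of the effective-scale folding in calc_scales() above.
INTERMEDIATE_SCALE = 2 ** -12

def effective_scale(activation_scale, weight_scale):
    """Fold activation and weight scales into a single rescale
    factor relative to the fixed intermediate scale."""
    return activation_scale * weight_scale / INTERMEDIATE_SCALE

input_scale = 0.004       # hypothetical int8 input scale
i2f_weight_scale = 0.002  # hypothetical input-to-forget weight scale
i2f = effective_scale(input_scale, i2f_weight_scale)
```

Each such factor is later converted to a fixed-point multiplier and shift when the config header is written.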
+    def calc_effective_bias(self, interpreter, zero_point, weight_tensor, bias_tensor, has_bias=True) -> list:
+
+        weights = interpreter.get_tensor(weight_tensor['index'])
+        dims = weight_tensor['shape']
+        row = dims[0]
+        col = dims[1]
+
+        if has_bias:
+            bias_data = interpreter.get_tensor(bias_tensor['index'])
+            output = bias_data
+        else:
+            output = np.zeros((row, ), dtype=np.int32)
+
+        for i_row in range(row):
+            row_sum = 0
+            for i_col in range(col):
+                row_sum = row_sum + weights[i_row][i_col]
+            output[i_row] = output[i_row] + row_sum * zero_point
+
+        return output
+
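The row/column loop in `calc_effective_bias` computes, for each output row, `bias[i] + zero_point * sum(weights[i])`, i.e. it folds the (already negated) zero point into the bias so the kernel can skip per-element offset handling. An equivalent vectorized sketch with made-up data:

```python
import numpy as np

def effective_bias(weights, zero_point, bias=None):
    """bias[i] + zero_point * sum_j weights[i][j], as int32 --
    a vectorized equivalent of the loop in calc_effective_bias()."""
    row_sums = weights.astype(np.int32).sum(axis=1)
    out = np.zeros(weights.shape[0], dtype=np.int32) if bias is None else bias.copy()
    return out + row_sums * zero_point

w = np.array([[1, 2], [3, -4]], dtype=np.int8)
b = np.array([10, 20], dtype=np.int32)
# zero_point is already negated by the caller, as in generate_data()
print(effective_bias(w, -128, b))  # -> [-374  148]
```

Passing `bias=None` mirrors the `has_bias=False` recurrent case, where the effective bias is purely the zero-point correction.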
+    def write_c_config_header(self) -> None:
+        super().write_c_config_header(write_common_parameters=False)
+
+        filename = self.config_data
+        filepath = self.headers_dir + filename
+        prefix = self.testdataset.upper()
+
+        with open(filepath, "a") as f:
+            f.write("#define {}_BUFFER_SIZE {}\n".format(prefix, self.batches * self.number_units))
+            f.write("#define {}_INPUT_BATCHES {}\n".format(prefix, self.batches))
+            f.write("#define {}_DST_SIZE {}\n".format(prefix, self.batches * self.time_steps * self.number_units))
+            f.write("#define {}_TIME_STEPS {}\n".format(prefix, self.time_steps))
+            f.write("#define {}_NUMBER_UNITS {}\n".format(prefix, self.number_units))
+            f.write("#define {}_NUMBER_INPUTS {}\n".format(prefix, self.number_inputs))
+            f.write("#define {}_TIME_MAJOR {}\n".format(prefix, int(self.time_major)))
+            f.write("#define {}_IN_ACTIVATION_MIN {}\n".format(prefix, self.in_activation_min))
+            f.write("#define {}_IN_ACTIVATION_MAX {}\n".format(prefix, self.in_activation_max))
+
+            (multiplier, shift) = self.quantize_scale(self.i2i_effective_scale)
+            f.write("#define {}_IN_TO_INPUT_MULTIPLIER {}\n".format(prefix, multiplier))
+            f.write("#define {}_IN_TO_INPUT_SHIFT {}\n".format(prefix, shift))
+            (multiplier, shift) = self.quantize_scale(self.i2f_effective_scale)
+            f.write("#define {}_IN_TO_FORGET_MULTIPLIER {}\n".format(prefix, multiplier))
+            f.write("#define {}_IN_TO_FORGET_SHIFT {}\n".format(prefix, shift))
+            (multiplier, shift) = self.quantize_scale(self.i2c_effective_scale)
+            f.write("#define {}_IN_TO_CELL_MULTIPLIER {}\n".format(prefix, multiplier))
+            f.write("#define {}_IN_TO_CELL_SHIFT {}\n".format(prefix, shift))
+            (multiplier, shift) = self.quantize_scale(self.i2o_effective_scale)
+            f.write("#define {}_IN_TO_OUTPUT_MULTIPLIER {}\n".format(prefix, multiplier))
+            f.write("#define {}_IN_TO_OUTPUT_SHIFT {}\n".format(prefix, shift))
+
+            (multiplier, shift) = self.quantize_scale(self.r2i_effective_scale)
+            f.write("#define {}_RECURRENT_TO_INPUT_MULTIPLIER {}\n".format(prefix, multiplier))
+            f.write("#define {}_RECURRENT_TO_INPUT_SHIFT {}\n".format(prefix, shift))
+            (multiplier, shift) = self.quantize_scale(self.r2f_effective_scale)
+            f.write("#define {}_RECURRENT_TO_FORGET_MULTIPLIER {}\n".format(prefix, multiplier))
+            f.write("#define {}_RECURRENT_TO_FORGET_SHIFT {}\n".format(prefix, shift))
+            (multiplier, shift) = self.quantize_scale(self.r2c_effective_scale)
+            f.write("#define {}_RECURRENT_TO_CELL_MULTIPLIER {}\n".format(prefix, multiplier))
+            f.write("#define {}_RECURRENT_TO_CELL_SHIFT {}\n".format(prefix, shift))
+            (multiplier, shift) = self.quantize_scale(self.r2o_effective_scale)
+            f.write("#define {}_RECURRENT_TO_OUTPUT_MULTIPLIER {}\n".format(prefix, multiplier))
+            f.write("#define {}_RECURRENT_TO_OUTPUT_SHIFT {}\n".format(prefix, shift))
+
+            (multiplier, shift) = self.quantize_scale(self.effective_hidden_scale)
+            f.write("#define {}_HIDDEN_MULTIPLIER {}\n".format(prefix, multiplier))
+            f.write("#define {}_HIDDEN_SHIFT {}\n".format(prefix, shift))
+
+            f.write("#define {}_HIDDEN_OFFSET {}\n".format(prefix, self.hidden_zp))
+
+            f.write("#define {}_OUTPUT_STATE_OFFSET {}\n".format(prefix, self.output_state_offset))
+            f.write("#define {}_CELL_STATE_SHIFT {}\n".format(prefix, self.cell_state_shift))
+
+            for i in range(len(self.lstm_scales)):
+                if len(self.lstm_scales[i]) == 0:
+                    continue
+                (multiplier, shift) = self.quantize_scale(self.lstm_scales[i][0])
+

+ 6 - 5
Tests/UnitTest/model_extractor.py

@@ -25,7 +25,9 @@ import subprocess
 import numpy as np
 import tensorflow as tf
 
-from generate_test_data import SoftmaxSettings, FullyConnectedSettings, ConvSettings, Interpreter, OpResolverType
+from conv_settings import ConvSettings
+from softmax_settings import SoftmaxSettings
+from fully_connected_settings import FullyConnectedSettings
 
 
 class MODEL_EXTRACTOR(SoftmaxSettings, FullyConnectedSettings, ConvSettings):
@@ -181,7 +183,6 @@ class MODEL_EXTRACTOR(SoftmaxSettings, FullyConnectedSettings, ConvSettings):
                 builtin_name = operator_codes[op['opcode_index']]['builtin_code']
             else:
                 builtin_name = ""
-            #op_name = 'layer_' + str(op_index) + '_' + builtin_name
 
             # Get stride and padding.
             if 'builtin_options' in op:
@@ -271,8 +272,8 @@ class MODEL_EXTRACTOR(SoftmaxSettings, FullyConnectedSettings, ConvSettings):
 
     def generate_data(self, input_data=None, weights=None, biases=None) -> None:
 
-        interpreter = Interpreter(model_path=str(self.tflite_model),
-                                  experimental_op_resolver_type=OpResolverType.BUILTIN_REF)
+        interpreter = self.Interpreter(model_path=str(self.tflite_model),
+                                       experimental_op_resolver_type=self.OpResolverType.BUILTIN_REF)
         interpreter.allocate_tensors()
 
         # Needed for input/output scale/zp as the equivalent json file data has too low precision.
@@ -283,7 +284,7 @@ class MODEL_EXTRACTOR(SoftmaxSettings, FullyConnectedSettings, ConvSettings):
 
         input_details = interpreter.get_input_details()
         if len(input_details) != 1:
-            raise RuntimeError(f"Only single input supported.")
+            raise RuntimeError("Only single input supported.")
         input_shape = input_details[0]['shape']
         input_data = self.get_randomized_input_data(input_data, input_shape)
         interpreter.set_tensor(input_details[0]["index"], tf.cast(input_data, tf.int8))

+ 128 - 0
Tests/UnitTest/pooling_settings.py

@@ -0,0 +1,128 @@
+# SPDX-FileCopyrightText: Copyright 2010-2023 Arm Limited and/or its affiliates <open-source-office@arm.com>
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the License); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an AS IS BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+from test_settings import TestSettings
+
+import numpy as np
+import tensorflow as tf
+
+
+class PoolingSettings(TestSettings):
+
+    def __init__(self,
+                 dataset,
+                 testtype,
+                 regenerate_weights,
+                 regenerate_input,
+                 regenerate_biases,
+                 schema_file,
+                 channels=8,
+                 x_in=4,
+                 y_in=4,
+                 w_x=4,
+                 w_y=4,
+                 stride_x=1,
+                 stride_y=1,
+                 randmin=TestSettings.INT8_MIN,
+                 randmax=TestSettings.INT8_MAX,
+                 bias_min=TestSettings.INT32_MIN,
+                 bias_max=TestSettings.INT32_MAX,
+                 batches=1,
+                 pad=False,
+                 relu6=False,
+                 out_activation_min=None,
+                 out_activation_max=None,
+                 int16xint8=False,
+                 interpreter="tensorflow"):
+        super().__init__(dataset,
+                         testtype,
+                         regenerate_weights,
+                         regenerate_input,
+                         regenerate_biases,
+                         schema_file,
+                         channels,
+                         channels,
+                         x_in,
+                         y_in,
+                         w_x,
+                         w_y,
+                         stride_x,
+                         stride_y,
+                         pad,
+                         randmin=randmin,
+                         randmax=randmax,
+                         relu6=relu6,
+                         out_activation_min=out_activation_min,
+                         out_activation_max=out_activation_max,
+                         int16xint8=int16xint8,
+                         interpreter=interpreter)
+
+    def generate_data(self, input_data=None) -> None:
+        if self.is_int16xint8:
+            datatype = "int16_t"
+            inttype = tf.int16
+        else:
+            datatype = "int8_t"
+            inttype = tf.int8
+
+        input_data = self.get_randomized_input_data(input_data)
+        self.generate_c_array(self.input_data_file_prefix, input_data, datatype=datatype)
+
+        input_data = tf.cast(input_data, tf.float32)
+
+        # Create a one-layer Keras model
+        model = tf.keras.models.Sequential()
+        input_shape = (self.batches, self.y_input, self.x_input, self.input_ch)
+        model.add(tf.keras.layers.InputLayer(input_shape=input_shape[1:], batch_size=self.batches))
+        if self.test_type == 'avgpool':
+            model.add(
+                tf.keras.layers.AveragePooling2D(pool_size=(self.filter_y, self.filter_x),
+                                                 strides=(self.stride_y, self.stride_x),
+                                                 padding=self.padding,
+                                                 input_shape=input_shape[1:]))
+        elif self.test_type == 'maxpool':
+            model.add(
+                tf.keras.layers.MaxPooling2D(pool_size=(self.filter_y, self.filter_x),
+                                             strides=(self.stride_y, self.stride_x),
+                                             padding=self.padding,
+                                             input_shape=input_shape[1:]))
+        else:
+            raise RuntimeError("Wrong test type")
+
+        interpreter = self.convert_and_interpret(model, inttype, input_data)
+
+        output_details = interpreter.get_output_details()
+        self.set_output_dims_and_padding(output_details[0]['shape'][2], output_details[0]['shape'][1])
+
+        # Generate reference
+        interpreter.invoke()
+        output_data = interpreter.get_tensor(output_details[0]["index"])
+        self.generate_c_array(self.output_data_file_prefix,
+                              np.clip(output_data, self.out_activation_min, self.out_activation_max),
+                              datatype=datatype)
+
+        self.write_c_config_header()
+        self.write_c_header_wrapper()
+
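The output dimensions read back from the interpreter in `generate_data` follow TensorFlow's standard pooling window arithmetic. A minimal sketch of that formula (the standard SAME/VALID rules, shown here for reference rather than taken from this repository's code):

```python
import math

def pool_out_dim(in_dim, filter_dim, stride, padding):
    """TensorFlow-style output size for a pooling window."""
    if padding == "SAME":
        return math.ceil(in_dim / stride)
    if padding == "VALID":
        return math.ceil((in_dim - filter_dim + 1) / stride)
    raise ValueError(padding)

# With the class defaults (4x4 input, 4x4 window, stride 1):
print(pool_out_dim(4, 4, 1, "VALID"))  # -> 1
print(pool_out_dim(4, 4, 1, "SAME"))   # -> 4
```

This is why `set_output_dims_and_padding` can simply trust `output_details[0]['shape']` from the converted model.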
+    def write_c_config_header(self) -> None:
+        super().write_c_config_header()
+
+        filename = self.config_data
+        filepath = self.headers_dir + filename
+        prefix = self.testdataset.upper()
+
+        with open(filepath, "a") as f:
+            self.write_common_config(f, prefix)

+ 163 - 0
Tests/UnitTest/softmax_settings.py

@@ -0,0 +1,163 @@
+# SPDX-FileCopyrightText: Copyright 2010-2023 Arm Limited and/or its affiliates <open-source-office@arm.com>
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the License); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an AS IS BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+import math
+from test_settings import TestSettings
+import tensorflow as tf
+
+
+class SoftmaxSettings(TestSettings):
+    softmax_input_integer_bits = 5
+
+    def __init__(self,
+                 dataset,
+                 testtype,
+                 regenerate_weights,
+                 regenerate_input,
+                 regenerate_biases,
+                 schema_file,
+                 x_in=5,
+                 y_in=1,
+                 randmin=TestSettings.INT8_MIN,
+                 randmax=TestSettings.INT8_MAX,
+                 int16xint8=False,
+                 inInt8outInt16=False,
+                 input_scale=0.003922,
+                 input_zp=-128,
+                 interpreter="tensorflow"):
+        super().__init__(dataset,
+                         testtype,
+                         regenerate_weights,
+                         regenerate_input,
+                         regenerate_biases,
+                         schema_file,
+                         1,
+                         1,
+                         x_in,
+                         y_in,
+                         1,
+                         1,
+                         1,
+                         1,
+                         False,
+                         randmin,
+                         randmax,
+                         int16xint8=int16xint8,
+                         interpreter=interpreter)
+        self.x_input = self.x_output = x_in
+        self.y_input = self.y_output = y_in
+        self.inInt8outInt16 = inInt8outInt16
+
+        if self.inInt8outInt16 and self.is_int16xint8:
+            raise RuntimeError("Specify input as either s8 or s16")
+
+        if self.inInt8outInt16:
+            self.input_scale = input_scale
+            self.json_template = "TestCases/Common/Softmax/softmax_int8_to_int16_template.json"
+            self.json_replacements = {
+                "num_rows": self.y_input,
+                "row_size": self.x_input,
+                "input_scale": input_scale,
+                "input_zp": input_zp
+            }
+
+    def calc_softmax_params(self):
+        if self.is_int16xint8:
+            input_scale_beta_rescale = self.input_scale / (10.0 / 65535.0)
+            (self.input_multiplier, self.input_left_shift) = self.quantize_scale(input_scale_beta_rescale)
+        else:
+            input_real_multiplier = min(self.input_scale * (1 << (31 - self.softmax_input_integer_bits)), (1 << 31) - 1)
+            (self.input_multiplier, self.input_left_shift) = self.quantize_scale(input_real_multiplier)
+
+            self.diff_min = ((1 << self.softmax_input_integer_bits) - 1) * \
+                            (1 << (31 - self.softmax_input_integer_bits)) / \
+                            (1 << self.input_left_shift)
+            self.diff_min = math.floor(self.diff_min)
+
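For the s8 branch above, `quantize_scale` (defined elsewhere in these scripts) produces the usual fixed-point multiplier/shift pair; the version below is a common frexp-based equivalent, included only as an assumption to make the `diff_min` arithmetic concrete for the class's default input scale:

```python
import math

def quantize_scale(scale):
    """Common decomposition: scale ~= multiplier * 2**(shift - 31).
    Hypothetical stand-in for the project's quantize_scale()."""
    significand, shift = math.frexp(scale)
    multiplier = round(significand * (1 << 31))
    if multiplier == (1 << 31):  # rounding pushed significand to 1.0
        multiplier //= 2
        shift += 1
    return multiplier, shift

INTEGER_BITS = 5         # softmax_input_integer_bits
input_scale = 0.003922   # the class's default s8 input scale
real_multiplier = min(input_scale * (1 << (31 - INTEGER_BITS)), (1 << 31) - 1)
multiplier, left_shift = quantize_scale(real_multiplier)
diff_min = math.floor(((1 << INTEGER_BITS) - 1)
                      * (1 << (31 - INTEGER_BITS))
                      / (1 << left_shift))
```

With these defaults the shift stays positive, so `1 << left_shift` is well defined and `diff_min` reduces to `31 * 2**(26 - left_shift)`.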
+    def write_c_config_header(self) -> None:
+        super().write_c_config_header(write_common_parameters=False)
+
+        filename = self.config_data
+        filepath = self.headers_dir + filename
+        prefix = self.testdataset.upper()
+
+        with open(filepath, "a") as f:
+            f.write("#define {}_NUM_ROWS {}\n".format(prefix, self.y_input))
+            f.write("#define {}_ROW_SIZE {}\n".format(prefix, self.x_input))
+            f.write("#define {}_INPUT_MULT {}\n".format(prefix, self.input_multiplier))
+            f.write("#define {}_INPUT_LEFT_SHIFT {}\n".format(prefix, self.input_left_shift))
+            if not self.is_int16xint8:
+                f.write("#define {}_DIFF_MIN {}\n".format(prefix, -self.diff_min))
+            f.write("#define {}_DST_SIZE {}\n".format(prefix, self.x_output * self.y_output))
+
+    def get_softmax_randomized_input_data(self, input_data, input_shape):
+        # Generate or load saved input data unless hardcoded data provided.
+        if input_data is not None:
+            input_data = tf.reshape(input_data, input_shape)
+        else:
+            input_data = self.get_randomized_data(input_shape,
+                                                  self.inputs_table_file,
+                                                  regenerate=self.regenerate_new_input)
+        return input_data
+
+    def generate_data(self, input_data=None, weights=None, biases=None) -> None:
+        input_data = self.get_softmax_randomized_input_data(input_data, [self.y_input, self.x_input])
+
+        if self.is_int16xint8:
+            inttype = tf.int16
+            datatype = "int16_t"
+        else:
+            inttype = tf.int8
+            datatype = "int8_t"
+
+        self.generate_c_array(self.input_data_file_prefix, input_data, datatype=datatype)
+
+        # Generate reference.
+        if self.inInt8outInt16:
+            # Output is int16.
+            datatype = "int16_t"
+
+            # Keras does not support int8 input and int16 output for Softmax.
+            # Using a template json instead.
+            generated_json = self.generate_json_from_template()
+            self.flatc_generate_tflite(generated_json, self.schema_file)
+
+            interpreter = self.Interpreter(model_path=str(self.model_path_tflite),
+                                           experimental_op_resolver_type=self.OpResolverType.BUILTIN_REF)
+            interpreter.allocate_tensors()
+            all_layers_details = interpreter.get_tensor_details()
+            input_layer = all_layers_details[0]
+            output_layer = all_layers_details[1]
+
+            interpreter.set_tensor(input_layer["index"], tf.cast(input_data, tf.int8))
+            interpreter.invoke()
+            output_data = interpreter.get_tensor(output_layer["index"])
+        else:
+            # Create a one-layer Keras model.
+            model = tf.keras.models.Sequential()
+            input_shape = (self.y_input, self.x_input)
+            model.add(tf.keras.layers.Softmax(input_shape=input_shape))
+
+            interpreter = self.convert_and_interpret(model, inttype, tf.expand_dims(input_data, axis=0))
+            output_details = interpreter.get_output_details()
+            interpreter.invoke()
+            output_data = interpreter.get_tensor(output_details[0]["index"])
+
+        self.calc_softmax_params()
+        self.generate_c_array(self.output_data_file_prefix, output_data, datatype=datatype)
+
+        self.write_c_config_header()
+        self.write_c_header_wrapper()

+ 256 - 0
Tests/UnitTest/svdf_settings.py

@@ -0,0 +1,256 @@
+# SPDX-FileCopyrightText: Copyright 2010-2023 Arm Limited and/or its affiliates <open-source-office@arm.com>
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the License); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an AS IS BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+from test_settings import TestSettings
+
+import tensorflow as tf
+
+
+class SVDFSettings(TestSettings):
+
+    def __init__(self,
+                 dataset,
+                 testtype,
+                 regenerate_weights,
+                 regenerate_input,
+                 regenerate_biases,
+                 schema_file,
+                 batches=2,
+                 number_inputs=2,
+                 rank=8,
+                 memory_size=10,
+                 randmin=TestSettings.INT8_MIN,
+                 randmax=TestSettings.INT8_MAX,
+                 input_size=3,
+                 number_units=4,
+                 generate_bias=True,
+                 int8_time_weights=False,
+                 input_scale=0.1,
+                 input_zp=0,
+                 w_1_scale=0.005,
+                 w_1_zp=0,
+                 w_2_scale=0.005,
+                 w_2_zp=0,
+                 bias_scale=0.00002,
+                 bias_zp=0,
+                 state_scale=0.005,
+                 state_zp=0,
+                 output_scale=0.1,
+                 output_zp=0,
+                 interpreter="tensorflow"):
+        super().__init__(dataset,
+                         testtype,
+                         regenerate_weights,
+                         regenerate_input,
+                         regenerate_biases,
+                         schema_file,
+                         1,
+                         1,
+                         1,
+                         1,
+                         1,
+                         1,
+                         1,
+                         1,
+                         False,
+                         randmin,
+                         randmax,
+                         generate_bias=generate_bias,
+                         interpreter=interpreter)
+        self.batches = batches
+        self.number_units = number_units
+        self.input_size = input_size
+        self.memory_size = memory_size
+        self.rank = rank
+        self.number_filters = self.number_units * self.rank
+        self.time_table_file = self.pregenerated_data_dir + self.testdataset + '/' + 'time_data.txt'
+
+        self.number_inputs = number_inputs
+        self.input_sequence_length = self.number_inputs * self.input_size * self.batches
+
+        self.int8_time_weights = int8_time_weights
+
+        if self.int8_time_weights:
+            self.json_template = "TestCases/Common/svdf_s8_weights_template.json"
+            self.in_activation_max = TestSettings.INT8_MAX
+            self.in_activation_min = TestSettings.INT8_MIN
+
+        else:
+            self.json_template = "TestCases/Common/svdf_template.json"
+            self.in_activation_max = TestSettings.INT16_MAX
+            self.in_activation_min = TestSettings.INT16_MIN
+
+        self.json_replacements = {
+            "memory_sizeXnumber_filters": self.memory_size * self.number_filters,
+            "batches": self.batches,
+            "input_size": self.input_size,
+            "number_filters": self.number_filters,
+            "memory_size": self.memory_size,
+            "number_units": self.number_units,
+            "rank_value": self.rank,
+            "input_scale": input_scale,
+            "input_zp": input_zp,
+            "w_1_scale": w_1_scale,
+            "w_1_zp": w_1_zp,
+            "w_2_scale": w_2_scale,
+            "w_2_zp": w_2_zp,
+            "bias_scale": bias_scale,
+            "bias_zp": bias_zp,
+            "state_scale": state_scale,
+            "state_zp": state_zp,
+            "output_scale": output_scale,
+            "output_zp": output_zp
+        }
+
+    def calc_multipliers_and_shifts(self, input_scale, weights_1_scale, weights_2_scale, state_scale, output_scale):
+        effective_scale_1 = weights_1_scale * input_scale / state_scale
+        effective_scale_2 = state_scale * weights_2_scale / output_scale
+        (self.multiplier_in, self.shift_1) = self.quantize_scale(effective_scale_1)
+        (self.multiplier_out, self.shift_2) = self.quantize_scale(effective_scale_2)
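With the default scales from `__init__` above (input 0.1, both weight tensors and the state 0.005, output 0.1), the two effective scales and their Q31 pairs can be sketched as follows. This is a standalone illustration, not the values read back from an actual tflite model:

```python
import math

def quantize_scale(scale):
    # Same decomposition as TestSettings.quantize_scale: Q31 significand + shift.
    significand, shift = math.frexp(scale)
    return round(significand * (1 << 31)), shift

# Default scales from SVDFSettings.__init__.
input_scale, w_1_scale, w_2_scale = 0.1, 0.005, 0.005
state_scale, output_scale = 0.005, 0.1

effective_scale_1 = w_1_scale * input_scale / state_scale   # input -> state path
effective_scale_2 = state_scale * w_2_scale / output_scale  # state -> output path

multiplier_in, shift_1 = quantize_scale(effective_scale_1)
multiplier_out, shift_2 = quantize_scale(effective_scale_2)

print(multiplier_in, shift_1)
print(multiplier_out, shift_2)
```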
+
+    def write_c_config_header(self) -> None:
+        super().write_c_config_header(write_common_parameters=False)
+
+        filename = self.config_data
+        filepath = self.headers_dir + filename
+        prefix = self.testdataset.upper()
+
+        with open(filepath, "a") as f:
+            f.write("#define {}_MULTIPLIER_IN {}\n".format(prefix, self.multiplier_in))
+            f.write("#define {}_MULTIPLIER_OUT {}\n".format(prefix, self.multiplier_out))
+            f.write("#define {}_SHIFT_1 {}\n".format(prefix, self.shift_1))
+            f.write("#define {}_SHIFT_2 {}\n".format(prefix, self.shift_2))
+            f.write("#define {}_IN_ACTIVATION_MIN {}\n".format(prefix, self.in_activation_min))
+            f.write("#define {}_IN_ACTIVATION_MAX {}\n".format(prefix, self.in_activation_max))
+            f.write("#define {}_RANK {}\n".format(prefix, self.rank))
+            f.write("#define {}_FEATURE_BATCHES {}\n".format(prefix, self.number_filters))
+            f.write("#define {}_TIME_BATCHES {}\n".format(prefix, self.memory_size))
+            f.write("#define {}_INPUT_SIZE {}\n".format(prefix, self.input_size))
+            f.write("#define {}_DST_SIZE {}\n".format(prefix, self.number_units * self.batches))
+            f.write("#define {}_OUT_ACTIVATION_MIN {}\n".format(prefix, self.out_activation_min))
+            f.write("#define {}_OUT_ACTIVATION_MAX {}\n".format(prefix, self.out_activation_max))
+            f.write("#define {}_INPUT_BATCHES {}\n".format(prefix, self.batches))
+            f.write("#define {}_INPUT_OFFSET {}\n".format(prefix, self.input_zero_point))
+            f.write("#define {}_OUTPUT_OFFSET {}\n".format(prefix, self.output_zero_point))
+
+    def generate_data(self, input_data=None, weights=None, biases=None, time_data=None, state_data=None) -> None:
+        if self.int8_time_weights:
+            if not self.use_tflite_micro_interpreter:
+                print("Warning: the tflite_micro interpreter must be used for SVDF int8. Skipping header generation.")
+                return
+
+        # TODO: Make this compatible with newer versions than 2.10.
+        if tuple(int(v) for v in tf.__version__.split('.')[:2]) > (2, 10):
+            print("Warning: tensorflow version > 2.10 not supported for SVDF unit tests. Skipping generating headers.")
+            return
+
+        if input_data is not None:
+            input_data = tf.reshape(input_data, [self.input_sequence_length])
+        else:
+            input_data = self.get_randomized_data([self.input_sequence_length],
+                                                  self.inputs_table_file,
+                                                  regenerate=self.regenerate_new_input)
+        self.generate_c_array("input_sequence", input_data)
+
+        if weights is not None:
+            weights_feature_data = tf.reshape(weights, [self.number_filters, self.input_size])
+        else:
+            weights_feature_data = self.get_randomized_data([self.number_filters, self.input_size],
+                                                            self.kernel_table_file,
+                                                            regenerate=self.regenerate_new_weights)
+
+        if time_data is not None:
+            weights_time_data = tf.reshape(time_data, [self.number_filters, self.memory_size])
+        else:
+            weights_time_data = self.get_randomized_data([self.number_filters, self.memory_size],
+                                                         self.time_table_file,
+                                                         regenerate=self.regenerate_new_weights)
+
+        if not self.generate_bias:
+            biases = [0] * self.number_units
+        if biases is not None:
+            biases = tf.reshape(biases, [self.number_units])
+        else:
+            biases = self.get_randomized_data([self.number_units],
+                                              self.bias_table_file,
+                                              regenerate=self.regenerate_new_weights)
+
+        # Generate tflite model
+        generated_json = self.generate_json_from_template(weights_feature_data,
+                                                          weights_time_data,
+                                                          biases,
+                                                          self.int8_time_weights)
+        self.flatc_generate_tflite(generated_json, self.schema_file)
+
+        # Run TFL interpreter
+        interpreter = self.Interpreter(model_path=str(self.model_path_tflite),
+                                       experimental_op_resolver_type=self.OpResolverType.BUILTIN_REF)
+        interpreter.allocate_tensors()
+
+        # Read back scales and zero points from tflite model
+        all_layers_details = interpreter.get_tensor_details()
+        input_layer = all_layers_details[0]
+        weights_1_layer = all_layers_details[1]
+        weights_2_layer = all_layers_details[2]
+        bias_layer = all_layers_details[3]
+        state_layer = all_layers_details[4]
+        output_layer = all_layers_details[5]
+        (input_scale, self.input_zero_point) = self.get_scale_and_zp(input_layer)
+        (weights_1_scale, zero_point) = self.get_scale_and_zp(weights_1_layer)
+        (weights_2_scale, zero_point) = self.get_scale_and_zp(weights_2_layer)
+        (bias_scale, zero_point) = self.get_scale_and_zp(bias_layer)
+        (state_scale, zero_point) = self.get_scale_and_zp(state_layer)
+        (output_scale, self.output_zero_point) = self.get_scale_and_zp(output_layer)
+
+        self.calc_multipliers_and_shifts(input_scale, weights_1_scale, weights_2_scale, state_scale, output_scale)
+
+        # Generate unit test C headers
+        self.generate_c_array("weights_feature", interpreter.get_tensor(weights_1_layer['index']))
+        self.generate_c_array(self.bias_data_file_prefix, interpreter.get_tensor(bias_layer['index']), "int32_t")
+
+        if self.int8_time_weights:
+            self.generate_c_array("weights_time", interpreter.get_tensor(weights_2_layer['index']), datatype='int8_t')
+            self.generate_c_array("state", interpreter.get_tensor(state_layer['index']), "int8_t")
+        else:
+            self.generate_c_array("weights_time", interpreter.get_tensor(weights_2_layer['index']), datatype='int16_t')
+            self.generate_c_array("state", interpreter.get_tensor(state_layer['index']), "int16_t")
+
+        if self.use_tflite_micro_interpreter:
+            interpreter = self.tflite_micro.runtime.Interpreter.from_file(model_path=str(self.model_path_tflite))
+
+        # Generate reference output
+        svdf_ref = None
+        for i in range(self.number_inputs):
+            start = i * self.input_size * self.batches
+            end = i * self.input_size * self.batches + self.input_size * self.batches
+            input_sequence = input_data[start:end]
+            input_sequence = tf.reshape(input_sequence, [self.batches, self.input_size])
+            if self.use_tflite_micro_interpreter:
+                interpreter.set_input(tf.cast(input_sequence, tf.int8), input_layer["index"])
+            else:
+                interpreter.set_tensor(input_layer["index"], tf.cast(input_sequence, tf.int8))
+            interpreter.invoke()
+            if self.use_tflite_micro_interpreter:
+                svdf_ref = interpreter.get_output(0)
+            else:
+                svdf_ref = interpreter.get_tensor(output_layer["index"])
+        self.generate_c_array(self.output_data_file_prefix, svdf_ref)
+
+        self.write_c_config_header()
+        self.write_c_header_wrapper()
+
+    def get_scale_and_zp(self, layer):
+        return (layer['quantization_parameters']['scales'][0], layer['quantization_parameters']['zero_points'][0])
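The per-invoke slicing in `generate_data` above walks the flat input sequence one `[batches, input_size]` chunk at a time. A minimal sketch with the default shape parameters (batches=2, input_size=3, number_inputs=2, so a 12-element sequence):

```python
# Same index arithmetic as the reference-output loop in SVDFSettings.generate_data.
batches, input_size, number_inputs = 2, 3, 2
input_data = list(range(batches * input_size * number_inputs))  # 0..11

chunks = []
for i in range(number_inputs):
    start = i * input_size * batches
    end = start + input_size * batches
    flat = input_data[start:end]
    # Reshape the flat slice to [batches, input_size], row-major.
    chunks.append([flat[b * input_size:(b + 1) * input_size] for b in range(batches)])

print(chunks)
# first invoke sees [[0, 1, 2], [3, 4, 5]], second sees [[6, 7, 8], [9, 10, 11]]
```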

+ 549 - 0
Tests/UnitTest/test_settings.py

@@ -0,0 +1,549 @@
+# SPDX-FileCopyrightText: Copyright 2010-2023 Arm Limited and/or its affiliates <open-source-office@arm.com>
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the License); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an AS IS BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+import os
+import sys
+import json
+import math
+import subprocess
+
+from abc import ABC, abstractmethod
+from packaging import version
+
+import numpy as np
+import tensorflow as tf
+
+
+class TestSettings(ABC):
+
+    # This is the generated test data used by the test cases.
+    OUTDIR = 'TestCases/TestData/'
+
+    # This is input to the data generation. If anything is regenerated it is overwritten,
+    # so it always holds the same data as the OUTDIR.
+    # The pregenerated data is primarily for debugging: it makes it possible to change a single
+    # parameter and see how the output changes (or does not change) without regenerating all input data.
+    # It is also convenient when testing changes in the script, to be able to rerun all test sets.
+    PREGEN = 'PregeneratedData/'
+
+    INT32_MAX = 2147483647
+    INT32_MIN = -2147483648
+    INT64_MAX = 9223372036854775807
+    INT64_MIN = -9223372036854775808
+    INT16_MAX = 32767
+    INT16_MIN = -32768
+    INT8_MAX = 127
+    INT8_MIN = -128
+
+    REQUIRED_MINIMUM_TENSORFLOW_VERSION = version.parse("2.10")
+
+    CLANG_FORMAT = 'clang-format-12 -i'  # For formatting generated headers.
+
+    def __init__(self,
+                 dataset,
+                 testtype,
+                 regenerate_weights,
+                 regenerate_input,
+                 regenerate_biases,
+                 schema_file,
+                 in_ch,
+                 out_ch,
+                 x_in,
+                 y_in,
+                 w_x,
+                 w_y,
+                 stride_x=1,
+                 stride_y=1,
+                 pad=False,
+                 randmin=np.iinfo(np.dtype('int8')).min,
+                 randmax=np.iinfo(np.dtype('int8')).max,
+                 batches=1,
+                 generate_bias=True,
+                 relu6=False,
+                 out_activation_min=None,
+                 out_activation_max=None,
+                 int16xint8=False,
+                 bias_min=np.iinfo(np.dtype('int32')).min,
+                 bias_max=np.iinfo(np.dtype('int32')).max,
+                 dilation_x=1,
+                 dilation_y=1,
+                 interpreter="tensorflow"):
+
+        if self.INT8_MIN != np.iinfo(np.dtype('int8')).min or self.INT8_MAX != np.iinfo(np.dtype('int8')).max or \
+           self.INT16_MIN != np.iinfo(np.dtype('int16')).min or self.INT16_MAX != np.iinfo(np.dtype('int16')).max or \
+           self.INT32_MIN != np.iinfo(np.dtype('int32')).min or self.INT32_MAX != np.iinfo(np.dtype('int32')).max:
+            raise RuntimeError("Unexpected int min/max error")
+
+        self.use_tflite_micro_interpreter = False
+
+        if interpreter == "tflite_runtime":
+            from tflite_runtime.interpreter import Interpreter
+            from tflite_runtime.interpreter import OpResolverType
+            import tflite_runtime as tfl_runtime
+
+            revision = tfl_runtime.__git_version__
+            version = tfl_runtime.__version__
+            interpreter = "tflite_runtime"
+
+        elif interpreter == "tensorflow":
+            from tensorflow.lite.python.interpreter import Interpreter
+            from tensorflow.lite.python.interpreter import OpResolverType
+
+            revision = tf.__git_version__
+            version = tf.__version__
+            interpreter = "tensorflow"
+
+        elif interpreter == "tflite_micro":
+            from tensorflow.lite.python.interpreter import Interpreter
+            from tensorflow.lite.python.interpreter import OpResolverType
+
+            import tflite_micro
+            self.tflite_micro = tflite_micro
+            self.use_tflite_micro_interpreter = True
+
+            revision = None
+            version = tflite_micro.__version__
+            interpreter = "tflite_micro"
+        else:
+            raise RuntimeError(f"Invalid interpreter {interpreter}")
+
+        self.Interpreter = Interpreter
+        self.OpResolverType = OpResolverType
+
+        self.tensorflow_reference_version = (
+            "// Generated by {} using tensorflow version {} (Keras version {}).\n".format(
+                os.path.basename(__file__), tf.__version__, tf.keras.__version__))
+
+        self.tensorflow_reference_version += ("// Interpreter from {} version {} and revision {}.\n".format(
+            interpreter, version, revision))
+
+        # Randomization interval
+        self.mins = randmin
+        self.maxs = randmax
+
+        self.bias_mins = bias_min
+        self.bias_maxs = bias_max
+
+        self.input_ch = in_ch
+        self.output_ch = out_ch
+        self.x_input = x_in
+        self.y_input = y_in
+        self.filter_x = w_x
+        self.filter_y = w_y
+        self.stride_x = stride_x
+        self.stride_y = stride_y
+        self.dilation_x = dilation_x
+        self.dilation_y = dilation_y
+        self.batches = batches
+        self.test_type = testtype
+        self.has_padding = pad
+
+        self.is_int16xint8 = int16xint8
+
+        if relu6:
+            self.out_activation_max = 6
+            self.out_activation_min = 0
+        else:
+            if out_activation_min is not None:
+                self.out_activation_min = out_activation_min
+            else:
+                self.out_activation_min = self.INT16_MIN if self.is_int16xint8 else self.INT8_MIN
+            if out_activation_max is not None:
+                self.out_activation_max = out_activation_max
+            else:
+                self.out_activation_max = self.INT16_MAX if self.is_int16xint8 else self.INT8_MAX
+
+        # Bias is optional.
+        self.generate_bias = generate_bias
+
+        self.generated_header_files = []
+        self.pregenerated_data_dir = self.PREGEN
+
+        self.config_data = "config_data.h"
+
+        self.testdataset = dataset
+
+        self.kernel_table_file = self.pregenerated_data_dir + self.testdataset + '/' + 'kernel.txt'
+        self.inputs_table_file = self.pregenerated_data_dir + self.testdataset + '/' + 'input.txt'
+        self.bias_table_file = self.pregenerated_data_dir + self.testdataset + '/' + 'bias.txt'
+
+        if self.has_padding:
+            self.padding = 'SAME'
+        else:
+            self.padding = 'VALID'
+
+        self.regenerate_new_weights = regenerate_weights
+        self.regenerate_new_input = regenerate_input
+        self.regenerate_new_bias = regenerate_biases
+        self.schema_file = schema_file
+
+        self.headers_dir = self.OUTDIR + self.testdataset + '/'
+        os.makedirs(self.headers_dir, exist_ok=True)
+
+        self.model_path = "{}model_{}".format(self.headers_dir, self.testdataset)
+        self.model_path_tflite = self.model_path + '.tflite'
+
+        self.input_data_file_prefix = "input"
+        self.weight_data_file_prefix = "weights"
+        self.bias_data_file_prefix = "biases"
+        self.output_data_file_prefix = "output_ref"
+
+    def save_multiple_dim_array_in_txt(self, file, data):
+        header = ','.join(map(str, data.shape))
+        np.savetxt(file, data.reshape(-1, data.shape[-1]), header=header, delimiter=',')
+
+    def load_multiple_dim_array_from_txt(self, file):
+        with open(file) as f:
+            shape = list(map(int, next(f)[1:].split(',')))
+            data = np.genfromtxt(f, delimiter=',').reshape(shape)
+        return data.astype(np.float32)
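The two helpers above round-trip an N-dimensional array through a text file: the first header line stores the shape, and the body stores the array flattened to 2-D. A self-contained sketch of that round trip using an in-memory buffer instead of a file:

```python
import io
import numpy as np

data = np.arange(24, dtype=np.float32).reshape(2, 3, 4)

# Save: shape in the '# '-prefixed header, data flattened to 2-D rows.
buf = io.StringIO()
header = ','.join(map(str, data.shape))
np.savetxt(buf, data.reshape(-1, data.shape[-1]), header=header, delimiter=',')

# Load: read the shape back from the header, then reshape the body.
buf.seek(0)
shape = list(map(int, next(buf)[1:].split(',')))
restored = np.genfromtxt(buf, delimiter=',').reshape(shape).astype(np.float32)

assert restored.shape == (2, 3, 4)
assert np.array_equal(restored, data)
```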
+
+    def convert_tensor_np(self, tensor_in, converter, *qminmax):
+        w = tensor_in.numpy()
+        shape = w.shape
+        w = w.ravel()
+        if len(qminmax) == 2:
+            fw = converter(w, qminmax[0], qminmax[1])
+        else:
+            fw = converter(w)
+        fw.shape = shape
+        return tf.convert_to_tensor(fw)
+
+    def convert_tensor(self, tensor_in, converter, *qminmax):
+        w = tensor_in.numpy()
+        shape = w.shape
+        w = w.ravel()
+        normal = np.array(w)
+        float_normal = []
+
+        for i in normal:
+            if len(qminmax) == 2:
+                float_normal.append(converter(i, qminmax[0], qminmax[1]))
+            else:
+                float_normal.append(converter(i))
+
+        np_float_array = np.asarray(float_normal)
+        np_float_array.shape = shape
+
+        return tf.convert_to_tensor(np_float_array)
+
+    def get_randomized_data(self, dims, npfile, regenerate, decimals=0, minrange=None, maxrange=None):
+        if minrange is None:
+            minrange = self.mins
+        if maxrange is None:
+            maxrange = self.maxs
+        if not os.path.exists(npfile) or regenerate:
+            regendir = os.path.dirname(npfile)
+            os.makedirs(regendir, exist_ok=True)
+            if decimals == 0:
+                data = tf.Variable(tf.random.uniform(dims, minval=minrange, maxval=maxrange, dtype=tf.dtypes.int64))
+                data = tf.cast(data, dtype=tf.float32)
+            else:
+                data = tf.Variable(tf.random.uniform(dims, minval=minrange, maxval=maxrange, dtype=tf.dtypes.float32))
+                data = np.around(data.numpy(), decimals)
+                data = tf.convert_to_tensor(data)
+
+            print("Saving data to {}".format(npfile))
+            self.save_multiple_dim_array_in_txt(npfile, data.numpy())
+        else:
+            print("Loading data from {}".format(npfile))
+            data = tf.convert_to_tensor(self.load_multiple_dim_array_from_txt(npfile))
+        return data
+
+    def get_randomized_input_data(self, input_data, input_shape=None):
+        # Generate or load saved input data unless hardcoded data is provided
+        if input_shape is None:
+            input_shape = [self.batches, self.y_input, self.x_input, self.input_ch]
+        if input_data is not None:
+            input_data = tf.reshape(input_data, input_shape)
+        else:
+            input_data = self.get_randomized_data(input_shape,
+                                                  self.inputs_table_file,
+                                                  regenerate=self.regenerate_new_input)
+        return input_data
+
+    def get_randomized_bias_data(self, biases):
+        # Generate or load saved bias data unless hardcoded data is provided
+        if not self.generate_bias:
+            biases = tf.reshape(np.full([self.output_ch], 0), [self.output_ch])
+        elif biases is not None:
+            biases = tf.reshape(biases, [self.output_ch])
+        else:
+            biases = self.get_randomized_data([self.output_ch],
+                                              self.bias_table_file,
+                                              regenerate=self.regenerate_new_bias,
+                                              minrange=self.bias_mins,
+                                              maxrange=self.bias_maxs)
+        return biases
+
+    def format_output_file(self, file):
+        command_list = self.CLANG_FORMAT.split(' ')
+        command_list.append(file)
+        try:
+            process = subprocess.run(command_list)
+            if process.returncode != 0:
+                print(f"ERROR: {command_list = }")
+                sys.exit(1)
+        except Exception as e:
+            raise RuntimeError(f"{e} from: {command_list = }")
+
+    def write_c_header_wrapper(self):
+        filename = "test_data.h"
+        filepath = self.headers_dir + filename
+
+        print("Generating C header wrapper {}...".format(filepath))
+        with open(filepath, 'w+') as f:
+            f.write(self.tensorflow_reference_version)
+            while len(self.generated_header_files) > 0:
+                f.write('#include "{}"\n'.format(self.generated_header_files.pop()))
+        self.format_output_file(filepath)
+
+    def write_common_config(self, f, prefix):
+        """
+        Shared by conv/depthwise_conv and pooling
+        """
+        f.write("#define {}_FILTER_X {}\n".format(prefix, self.filter_x))
+        f.write("#define {}_FILTER_Y {}\n".format(prefix, self.filter_y))
+        f.write("#define {}_STRIDE_X {}\n".format(prefix, self.stride_x))
+        f.write("#define {}_STRIDE_Y {}\n".format(prefix, self.stride_y))
+        f.write("#define {}_PAD_X {}\n".format(prefix, self.pad_x))
+        f.write("#define {}_PAD_Y {}\n".format(prefix, self.pad_y))
+        f.write("#define {}_OUTPUT_W {}\n".format(prefix, self.x_output))
+        f.write("#define {}_OUTPUT_H {}\n".format(prefix, self.y_output))
+
+    def write_c_common_header(self, f):
+        f.write(self.tensorflow_reference_version)
+        f.write("#pragma once\n")
+
+    def write_c_config_header(self, write_common_parameters=True) -> None:
+        filename = self.config_data
+
+        self.generated_header_files.append(filename)
+        filepath = self.headers_dir + filename
+
+        prefix = self.testdataset.upper()
+
+        print("Writing C header with config data {}...".format(filepath))
+        with open(filepath, "w+") as f:
+            self.write_c_common_header(f)
+            if write_common_parameters:
+                f.write("#define {}_OUT_CH {}\n".format(prefix, self.output_ch))
+                f.write("#define {}_IN_CH {}\n".format(prefix, self.input_ch))
+                f.write("#define {}_INPUT_W {}\n".format(prefix, self.x_input))
+                f.write("#define {}_INPUT_H {}\n".format(prefix, self.y_input))
+                f.write("#define {}_DST_SIZE {}\n".format(
+                    prefix, self.x_output * self.y_output * self.output_ch * self.batches))
+                f.write("#define {}_INPUT_SIZE {}\n".format(prefix, self.x_input * self.y_input * self.input_ch))
+                f.write("#define {}_OUT_ACTIVATION_MIN {}\n".format(prefix, self.out_activation_min))
+                f.write("#define {}_OUT_ACTIVATION_MAX {}\n".format(prefix, self.out_activation_max))
+                f.write("#define {}_INPUT_BATCHES {}\n".format(prefix, self.batches))
+        self.format_output_file(filepath)
+
+    def get_data_file_name_info(self, name_prefix) -> (str, str):
+        filename = name_prefix + "_data.h"
+        filepath = self.headers_dir + filename
+        return filename, filepath
+
+    def generate_c_array(self, name, array, datatype="int8_t", const="const ") -> None:
+        w = None
+
+        if type(array) is list:
+            w = array
+            size = len(array)
+        elif type(array) is np.ndarray:
+            w = array
+            w = w.ravel()
+            size = w.size
+        else:
+            w = array.numpy()
+            w = w.ravel()
+            size = tf.size(array)
+
+        filename, filepath = self.get_data_file_name_info(name)
+        self.generated_header_files.append(filename)
+
+        print("Generating C header {}...".format(filepath))
+        with open(filepath, "w+") as f:
+            self.write_c_common_header(f)
+            f.write("#include <stdint.h>\n\n")
+            if size > 0:
+                f.write(const + datatype + " " + self.testdataset + '_' + name + "[%d] =\n{\n" % size)
+                for i in range(size - 1):
+                    f.write("  %d,\n" % w[i])
+                f.write("  %d\n" % w[size - 1])
+                f.write("};\n")
+            else:
+                f.write(const + datatype + " *" + self.testdataset + '_' + name + " = NULL;\n")
+        self.format_output_file(filepath)
+
+    def set_output_dims_and_padding(self, output_x, output_y):
+        self.x_output = output_x
+        self.y_output = output_y
+        if self.has_padding:
+            # Take dilation into account.
+            filter_x = (self.filter_x - 1) * self.dilation_x + 1
+            filter_y = (self.filter_y - 1) * self.dilation_y + 1
+
+            pad_along_width = max((self.x_output - 1) * self.stride_x + filter_x - self.x_input, 0)
+            pad_along_height = max((self.y_output - 1) * self.stride_y + filter_y - self.y_input, 0)
+            pad_top = pad_along_height // 2
+            pad_left = pad_along_width // 2
+            self.pad_x = pad_left
+            self.pad_y = pad_top
+        else:
+            self.pad_x = 0
+            self.pad_y = 0
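The SAME-padding arithmetic above can be worked through for one axis. The layer parameters here are hypothetical (input width 9, filter 3, stride 2, dilation 1), chosen only to make the numbers easy to follow:

```python
import math

x_input, filter_x, stride_x, dilation_x = 9, 3, 2, 1

x_output = math.ceil(x_input / stride_x)      # SAME keeps ceil(in / stride) = 5
eff_filter = (filter_x - 1) * dilation_x + 1  # dilated filter extent = 3

pad_along_width = max((x_output - 1) * stride_x + eff_filter - x_input, 0)
pad_left = pad_along_width // 2               # TF puts any odd extra pixel on the right

print(x_output, pad_along_width, pad_left)  # 5 2 1
```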
+
+    @abstractmethod
+    def generate_data(self, input_data=None, weights=None, biases=None) -> None:
+        ''' Must be overridden '''
+
+    def quantize_scale(self, scale):
+        significand, shift = math.frexp(scale)
+        significand_q31 = round(significand * (1 << 31))
+        return significand_q31, shift
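A worked example of `quantize_scale`: `math.frexp` splits the float scale into a significand in [0.5, 1.0) and a power-of-two exponent, and the significand is rounded to a signed Q31 fixed-point multiplier:

```python
import math

# 0.75 == 0.75 * 2**0, so frexp yields significand 0.75 and shift 0;
# the Q31 multiplier is the significand scaled by 2**31 and rounded.
significand, shift = math.frexp(0.75)
significand_q31 = round(significand * (1 << 31))
print(significand_q31, shift)  # → 1610612736 0
```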
+
+    def get_calib_data_func(self, n_inputs, shape):
+
+        def representative_data_gen():
+            representative_testsets = []
+            if n_inputs > 0:
+                for i in range(n_inputs):
+                    representative_testsets.append(np.ones(shape, dtype=np.float32))
+                yield representative_testsets
+            else:
+                raise RuntimeError(
+                    "Invalid number of representative test sets: {}. Must be more than 0".format(n_inputs))
+
+        return representative_data_gen
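The generator returned by `get_calib_data_func` can be exercised on its own; a minimal sketch (function name is illustrative) showing that each yielded list holds one all-ones float32 tensor per model input, serving as a single calibration sample for the TFLite converter:

```python
import numpy as np

# Sketch of the calibration-data generator: one all-ones float32 tensor
# per model input, yielded together as one calibration sample.
def make_representative_data_gen(n_inputs, shape):
    def representative_data_gen():
        yield [np.ones(shape, dtype=np.float32) for _ in range(n_inputs)]
    return representative_data_gen

gen = make_representative_data_gen(n_inputs=2, shape=(1, 4))
sample = next(gen())
print(len(sample), sample[0].shape, str(sample[0].dtype))
```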
+
+    def convert_and_interpret(self, model, inttype, input_data=None, dataset_shape=None):
+        """
+        Compile the model, convert it to TFLite format, then create an interpreter and allocate tensors.
+        """
+        model.compile(loss=tf.keras.losses.categorical_crossentropy,
+                      optimizer=tf.keras.optimizers.Adam(),
+                      metrics=['accuracy'])
+        n_inputs = len(model.inputs)
+
+        if dataset_shape:
+            representative_dataset_shape = dataset_shape
+        else:
+            representative_dataset_shape = (self.batches, self.y_input, self.x_input, self.input_ch)
+
+        converter = tf.lite.TFLiteConverter.from_keras_model(model)
+
+        representative_dataset = self.get_calib_data_func(n_inputs, representative_dataset_shape)
+
+        converter.optimizations = [tf.lite.Optimize.DEFAULT]
+        converter.representative_dataset = representative_dataset
+        if self.is_int16xint8:
+            converter.target_spec.supported_ops = [
+                tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
+            ]
+        else:
+            converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
+        converter.inference_input_type = inttype
+        converter.inference_output_type = inttype
+        tflite_model = converter.convert()
+
+        os.makedirs(os.path.dirname(self.model_path_tflite), exist_ok=True)
+        with open(self.model_path_tflite, "wb") as model:
+            model.write(tflite_model)
+
+        interpreter = self.Interpreter(model_path=str(self.model_path_tflite),
+                                       experimental_op_resolver_type=self.OpResolverType.BUILTIN_REF)
+        interpreter.allocate_tensors()
+
+        output_details = interpreter.get_output_details()
+        (self.output_scale, self.output_zero_point) = output_details[0]['quantization']
+
+        if input_data is not None:
+            input_details = interpreter.get_input_details()
+            (self.input_scale, self.input_zero_point) = input_details[0]['quantization']
+
+            # Set input tensors
+            interpreter.set_tensor(input_details[0]["index"], tf.cast(input_data, inttype))
+
+        return interpreter
+
+    def generate_json_from_template(self,
+                                    weights_feature_data=None,
+                                    weights_time_data=None,
+                                    bias_data=None,
+                                    int8_time_weights=False):
+        """
+        Read a JSON template, substitute the given parameters and write the result to a new JSON file.
+        """
+        generated_json_file = self.model_path + '.json'
+
+        with open(self.json_template, 'r') as in_file, open(generated_json_file, 'w') as out_file:
+            # Update shapes, scales and zero points
+            data = in_file.read()
+            for item, to_replace in self.json_replacements.items():
+                data = data.replace(item, str(to_replace))
+
+            data = json.loads(data)
+
+            # Update weights and bias data
+            if weights_feature_data is not None:
+                w_1_buffer_index = 1
+                data["buffers"][w_1_buffer_index]["data"] = self.to_bytes(weights_feature_data.numpy().ravel(), 1)
+            if weights_time_data is not None:
+                w_2_buffer_index = 2
+                if int8_time_weights:
+                    data["buffers"][w_2_buffer_index]["data"] = self.to_bytes(weights_time_data.numpy().ravel(), 1)
+                else:
+                    data["buffers"][w_2_buffer_index]["data"] = self.to_bytes(weights_time_data.numpy().ravel(), 2)
+            if bias_data is not None:
+                bias_buffer_index = 3
+                data["buffers"][bias_buffer_index]["data"] = self.to_bytes(bias_data.numpy().ravel(), 4)
+
+            json.dump(data, out_file, indent=2)
+
+        return generated_json_file
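The template patching above can be sketched with an inline template instead of a file; the placeholder name and buffer layout here are illustrative, not taken from the real SVDF template:

```python
import json

# Text-level placeholder substitution followed by JSON parsing, as in
# generate_json_from_template (placeholder and layout are illustrative).
template = '{"buffers": [{}, {"data": []}], "shape": [INPUT_SIZE]}'
replacements = {"INPUT_SIZE": 8}

data = template
for item, to_replace in replacements.items():
    data = data.replace(item, str(to_replace))
data = json.loads(data)

# Patch raw little-endian weight bytes into buffer 1, as the generator
# does for the feature weights.
data["buffers"][1]["data"] = [1, 255, 2]
print(json.dumps(data))
```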
+
+    def flatc_generate_tflite(self, json_input, schema):
+        flatc = 'flatc'
+        if schema is None:
+            raise RuntimeError("A schema file is required.")
+        command = "{} -o {} -c -b {} {}".format(flatc, self.headers_dir, schema, json_input)
+        command_list = command.split(' ')
+        try:
+            process = subprocess.run(command_list)
+            if process.returncode != 0:
+                print(f"ERROR: {command = }")
+                sys.exit(1)
+        except Exception as e:
+            raise RuntimeError(f"{e} from: {command = }. Did you install flatc?")
+
+    def to_bytes(self, tensor_data, type_size) -> list:
+        result_bytes = []
+
+        if type_size == 1:
+            tensor_type = np.uint8
+        elif type_size == 2:
+            tensor_type = np.uint16
+        elif type_size == 4:
+            tensor_type = np.uint32
+        else:
+            raise RuntimeError("Size not supported: {}".format(type_size))
+
+        for val in tensor_data:
+            for byte in int(tensor_type(val)).to_bytes(type_size, 'little'):
+                result_bytes.append(byte)
+
+        return result_bytes
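A worked example of the little-endian packing in `to_bytes`: casting to the unsigned numpy type of the target width wraps negative values to their two's-complement bit pattern before the bytes are emitted low-byte first:

```python
import numpy as np

# -1 as int16 wraps to 0xFFFF via the uint16 cast; 258 is 0x0102.
# Each value is then split into bytes, least-significant byte first.
result_bytes = []
for val in np.array([-1, 258], dtype=np.int16):
    for byte in int(np.uint16(val)).to_bytes(2, 'little'):
        result_bytes.append(byte)
print(result_bytes)  # → [255, 255, 2, 1]
```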
