
[Question]: Tensorflow.Keras v0.15 usable with .NET 4.8 ? #1283

Open
abrguyt opened this issue Jan 21, 2025 · 3 comments


abrguyt commented Jan 21, 2025

Description

I'm new to Tensorflow/Keras development but seasoned in .NET, and I intend to use the Keras functionality in a .NET 4.8 application on Windows 10/11.

The most recent version available on NuGet seems to be Tensorflow.Keras v0.15 (6 Nov 2023). I've got all the code for data preparation/model fitting ready, but converting a float[][] input array (training_X) to a Tensorflow.Numpy.NDArray results in a NotImplementedException:

np.array(training_X)

I can use the NumSharp package - with that, 'np.array(training_X)' works - but the Keras 'model.fit()' only accepts the Tensorflow.Numpy.NDArray type, so it seems I can't use the NumSharp NDArray for that.

Any suggestions on how to resolve this? This is a module of a larger project, so I can't just switch to .NET (Core) 9 (yet).

Maybe there is a way to convert the NumSharp NDArray to a Tensorflow.Numpy.NDArray? My training/validation/test data is all provided as float[][] arrays (for both X and Y).
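One workaround sketch (hypothetical and untested; it assumes np.array handles a rectangular float[,] in the places where the jagged float[][] overload is unimplemented):

```csharp
// Hypothetical workaround: copy the jagged float[][] into a rectangular
// float[,], which np.array may accept where the jagged overload throws.
// Assumes every inner array has the same length.
private static float[,] ToRectangular(float[][] jagged)
{
    int rows = jagged.Length, cols = jagged[0].Length;
    var rect = new float[rows, cols];
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            rect[r, c] = jagged[r][c];
    return rect;
}

// usage sketch:
// var training_X_nd = np.array(ToRectangular(training_X));
```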

Alternatives

No response

@bhairav-thakkar

I have used it on .NET Framework 4.7.2, and I do remember having made it work on .NET Framework 4.8 as well. However, the issue you are mentioning is because of the code base: the code is still evolving, and there are a number of functions that are not yet implemented.

For the problem you mentioned about getting Tensorflow.Numpy.NDArray, do you think the following will help?:

private Tensorflow.NumPy.NDArray ConvertToTfNpArray(NumSharp.NDArray npArray)
{
	// Flatten the NumSharp array to a plain array, wrap it in a
	// Tensorflow.NumPy.NDArray, then restore the original shape.
	return new Tensorflow.NumPy.NDArray(npArray.ToArray()).reshape(npArray.shape);
}

Creating a NumSharp.NDArray from the arrays you have should be easy, I guess.
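Putting the pieces together, a usage sketch (names are illustrative; per the original post, NumSharp's np.array does accept the jagged float[][]):

```csharp
// Sketch: bridge float[][] -> NumSharp.NDArray -> Tensorflow.NumPy.NDArray
// via the helper above. Names and flow are illustrative only.
NumSharp.NDArray nsTrainingX = NumSharp.np.array(training_X);
Tensorflow.NumPy.NDArray tfTrainingX = ConvertToTfNpArray(nsTrainingX);
// model.fit(tfTrainingX, tfTrainingY, ...);
```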

@abrguyt
Author

abrguyt commented Jan 22, 2025

Thanks, that suggestion was really helpful - with that conversion I can get the model running.

The home page mentions that, when adding the custom repo URL as a NuGet source, I may be able to use a newer version than the v0.15 available on the default repo (https://api.nuget.org/v3/index.json)?

Although the model seems to fit and report losses/accuracies, both the (Keras) model.summary() and model.fit() produce NO output in the console (even with verbose: 2 when calling fit()) - am I overlooking a logging setting somewhere?

Many thanks for your helpful input.

@abrguyt
Author

abrguyt commented Jan 22, 2025

In the meantime I have been successful in fitting a Keras model with decent loss and accuracy (using the workarounds you described), and I can make reasonable predictions with the trained model. But I'm running into a peculiar situation after training, when the model is saved with 'model.save_weights("file.hc5")' (because, AFAIK, a full model save is not available in v0.15 of Tensorflow.Keras).

When the exact same Keras model configuration is re-created and the previously trained weights are loaded with 'model.load_weights("file.hc5")', a prediction with an identical x input tensor results in a very different y output tensor.

But if the trained model is given 'model.load_weights("file.hc5")' immediately after 'model.save_weights("file.hc5")', the prediction results are identical to those of the freshly trained model - so the weight storage itself seems to be fine.

So purely calling 'model.load_weights("file.hc5")' on a newly created Keras model results in vastly different outcomes. This is puzzling, as the same method creates an identical Keras model in both cases. It seems as if the initially trained model has state other than the weights that influences the prediction outcome.

Any ideas/suggestions? Without the ability to resurrect and use a trained model (or its weights with the same config), the usefulness of the whole prediction engine disappears. I haven't tried versions newer than v0.15 yet (if available).

The method for creating and using the Keras model is very basic/straightforward:

private IModel CreateModel(float learningRate = 0.001f)
{
	// input layer
	var inputs = tf.keras.Input(shape: 17);

	var layers = new LayersApi();

	// hidden layer 1
	var outputs = layers.Dense(units: 64, activation: tf.keras.activations.Relu).Apply(inputs);

	// hidden layer 2
	outputs = layers.Dense(units: 32, activation: tf.keras.activations.Relu).Apply(outputs);

	// output layer
	outputs = layers.Dense(units: 2, activation: tf.keras.activations.Linear).Apply(outputs);

	var model = tf.keras.Model(inputs: inputs, outputs: outputs);

	model.summary();

	var optimiser = new Adam(learning_rate: learningRate);

	// use MeanSquaredError as the loss function
	var lossFunc = new MeanSquaredError();

	model.compile(
		optimizer: optimiser,
		loss: lossFunc,
		// custom accuracy IMetricFunc instance using a tolerance threshold
		metrics: [new CustomAccuracyTolerance()]
	);

	return model;
}

Training done with:

IModel TrainModel(
	NDArray training_X, NDArray training_Y,
	NDArray validation_X, NDArray validation_Y)
{
	var model = CreateModel(learningRate: 0.001f);

	var batchSize = 10;
	var stepsNeeded = training_X.Length / batchSize;
	var epochNr = 100;

	// define early stopping
	var earlyStopping = new EarlyStopping(
		new CallbackParams() {
			Model = model,
			Epochs = epochNr,
			Steps = stepsNeeded,
			Verbose = 2
		},
		monitor: "val_loss",
		patience: 10,
		restore_best_weights: true);

	var history = model.fit(
		training_X,
		training_Y,
		batchSize,
		epochNr,
		verbose: 2,
		callbacks: [earlyStopping],
		validation_data: (validation_X, validation_Y));

	return model;
}

The training of the model, predictions and weight save/load then looks like this:

// initial model creation + training
var model = TrainModel(training_X, training_Y, validation_X, validation_Y);

// try prediction with one data sample X
var predict_Y_a = model.Predict(sample_X);

// save weights
model.save_weights("file.hc5");

// immediately reload same weights in same model as test
model.load_weights("file.hc5");

// re-try prediction with same data sample X; this will yield same result as 'predict_Y_a' as expected
var predict_Y_b = model.Predict(sample_X);

// now we re-create the same model configuration (in real-life this happens at later stage/location)
// but without training
var model_2 = CreateModel(learningRate: 0.001f);

// reload same weights in newly re-created model
model_2.load_weights("file.hc5");

// re-try prediction with same data sample X
// this will now yield a very different result as 'predict_Y_a'/ 'predict_Y_b'
var predict_Y_c = model_2.Predict(sample_X);
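A diagnostic sketch that might help narrow this down (hedged: 'get_weights()' returning a list of NDArrays, and 'np.max'/'np.abs' over them, are assumptions about the v0.15 API, not verified): compare the weight tensors of the trained model with those of the re-created model after load_weights, to see whether the divergence lies in the stored values or in how they are mapped onto layers.

```csharp
// Hypothetical diagnostic: after model_2.load_weights(...), compare each
// weight tensor against the trained model's. Large per-tensor differences
// would suggest weights are being mapped to the wrong layers/variables.
var trainedWeights = model.get_weights();
var reloadedWeights = model_2.get_weights();
for (int i = 0; i < trainedWeights.Count; i++)
{
	var diff = np.max(np.abs(trainedWeights[i] - reloadedWeights[i]));
	Console.WriteLine($"weight tensor {i}: max abs diff = {diff}");
}
```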
