Results 1 - 6 of 6
Learning from interpolated images using neural networks for digital forensics
Proc. Computer Vision and Pattern Recognition, 2010
Abstract
Cited by 3 (1 self)
Interpolated images have data redundancy, and a special correlation exists among neighboring pixels, which is a crucial clue in digital forensics. We design a neural-network-based framework to approximate the stylized computational rules of interpolation algorithms in order to learn the statistical inter-pixel correlation of interpolated images. The interpolation process is thus inferred from the interpolation results. Experiments are carried out on camera built-in Color Filter Array interpolation and super-resolution: three classifiers are trained to classify image interpolation algorithms, identify source cameras, and uncover digital forgeries. Like the Wiener attack in watermarking, the special correlation can be reduced or transferred to another image by our learned network.
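As a toy illustration of the inter-pixel correlation this abstract refers to (this is not the paper's neural-network framework; the function name and the nearest-neighbor upsampling are illustrative), a least-squares predictor of each pixel from its four neighbors leaves a much smaller residual on interpolated images than on uncorrelated data:

```python
import numpy as np

def neighbor_prediction_error(img):
    """Fit a least-squares predictor of each interior pixel from its 4
    neighbors and return the RMS residual. A low residual indicates strong
    inter-pixel correlation, of the kind interpolation leaves behind."""
    up, down = img[:-2, 1:-1], img[2:, 1:-1]
    left, right = img[1:-1, :-2], img[1:-1, 2:]
    center = img[1:-1, 1:-1]
    X = np.stack([up, down, left, right], axis=-1).reshape(-1, 4)
    y = center.ravel()
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sqrt(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
natural = rng.random((64, 64))            # uncorrelated stand-in "image"
small = rng.random((32, 32))
interp = np.kron(small, np.ones((2, 2)))  # 2x nearest-neighbor upsample

# Upsampling makes pixels predictable from their neighbors.
assert neighbor_prediction_error(interp) < neighbor_prediction_error(natural)
```

A real forensic detector would of course use a richer model than a single global linear predictor, but the residual gap is the statistical clue the abstract describes.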
Fast Learning Neural Network using Modified Corners Algorithm
Global Congress on Intelligent Systems
Abstract
In the past we have seen various developments in the philosophy and application of neural networks. We today have the backpropagation algorithm, Hopfield networks, perceptrons, etc. All these are very precise tools which model data very well. Unfortunately, the problem faced these days is that of training the neural network in a short span of time over the test data. All the above-mentioned tools may not be useful in situations where the neural network needs to be trained rapidly. The solutions offered for this were the Corners rule and the associated CC1 to CC4 algorithms, each with various pros and cons. This paper uses a different type of modeling to represent data and hence solve the problem of fast learning. Here we use the distance between the training data and an unknown input to calculate the most probable output of the neural network. This algorithm improves on the others in that it places no special restrictions on the inputs, as was the case with CC3. Also, the algorithm uses an input model very similar to the traditional one in terms of inputs and outputs, so users may find it easy to switch between the traditional neural-network style and the network proposed in this paper. The algorithm sets up a neural network whose weights are assigned by looking at the inputs. In testing, the inputs are provided and the most probable output is calculated. The neural network uses a single hidden layer, and the best neurons of the hidden layer are invoked for every input. The algorithm was trained on some points of a two-color picture; when we tried to reproduce it, the results showed the algorithm was efficient and accurate.
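The distance-based output selection described above is, in spirit, nearest-neighbor retrieval over stored training examples, with one hidden "neuron" per example. A minimal sketch under that reading (the class name, the `k` parameter, and the toy data are illustrative, not the paper's exact algorithm):

```python
import numpy as np

class DistanceNet:
    """One hidden neuron per stored training example; at test time the
    closest stored inputs (smallest distance) determine the output."""

    def fit(self, X, y):
        # "Training" just assigns the weights from the inputs themselves,
        # which is what makes this style of network fast to set up.
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        return self

    def predict(self, x, k=1):
        d = np.linalg.norm(self.X - np.asarray(x, float), axis=1)
        best = np.argsort(d)[:k]           # invoke the k best hidden neurons
        vals, counts = np.unique(self.y[best], return_counts=True)
        return vals[np.argmax(counts)]     # most probable output

# Toy "two-color picture": label a point by which half it falls in.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]
net = DistanceNet().fit(X, y)
print(net.predict([0.9, 0.2]))  # -> 1
```

Because the stored inputs and outputs look like an ordinary training set, switching between this and a conventionally trained network requires no change to the data format.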
Fixed point method of stepsize estimation for online neural network training
Abstract
Fixed point method of stepsize estimation for online neural network training. Paweł Wawrzyński, Member, IEEE. This paper considers online training of feedforward neural networks. Training examples are only available sampled randomly from a given generator. What emerges in this setting is the problem of adapting stepsizes, or learning rates. A scheme for determining stepsizes is introduced here that satisfies the following requirements: (i) it does not need any auxiliary problem-dependent parameters, (ii) it does not assume any particular loss function that the training process is intended to minimize, and (iii) it makes the learning process stable and efficient. An experimental study with 2D Gabor function approximation is presented.
A Regularized Learning Method for Neural Networks Based on Sensitivity Analysis
Abstract
The SBLLM (Sensitivity-Based Linear Learning Method) is a learning method for two-layer feedforward neural networks, based on sensitivity analysis, that calculates the weights by solving a system of linear equations. This yields an important saving in computational time, which significantly enhances the behavior of this method compared to other learning algorithms. This paper introduces a generalization of the SBLLM by adding a regularization term to the cost function. The theoretical basis for the method is given and its performance is illustrated.
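The key idea, obtaining weights from a linear solve rather than iterative gradient descent, can be sketched for a single layer with an invertible activation (a simplification of the SBLLM, which handles both layers; the variable names and the ridge-style regularizer are illustrative):

```python
import numpy as np

# Sketch: for a layer with invertible activation f, fitting f(X @ w) = z
# reduces to the linear least-squares problem X @ w = f^{-1}(z).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
true_w = 0.5 * rng.standard_normal(5)
z = np.tanh(X @ true_w)                 # targets after a tanh activation

# One linear solve recovers the weights -- no gradient descent needed.
w, *_ = np.linalg.lstsq(X, np.arctanh(z), rcond=None)
assert np.allclose(w, true_w)

# A regularization term in the cost leads to ridge-type normal equations
# (this mirrors the paper's extension only in spirit; lam is illustrative).
lam = 0.1
w_reg = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ np.arctanh(z))
```

The regularized solve shrinks the weights toward zero, trading a little training error for better conditioning, which is the usual motivation for adding such a term to the cost function.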
A distributed learning algorithm based on two-layer artificial neural networks and genetic algorithms
Abstract
A distributed learning algorithm based on two-layer artificial neural networks and genetic algorithms.