Results 1–9 of 9
Learning from interpolated images using neural networks for digital forensics
Proc. Computer Vision and Pattern Recognition, 2010
Abstract

Cited by 3 (1 self)
Interpolated images have data redundancy, and a special correlation exists among neighboring pixels, which is a crucial clue in digital forensics. We design a neural-network-based framework to approximate the stylized computational rules of interpolation algorithms, learning the statistical inter-pixel correlation of interpolated images. The interpolation process is recognized from the interpolation results. Experiments are carried out on camera built-in Color Filter Array interpolation and super-resolution: three classifiers are trained to classify image interpolation algorithms, identify source cameras, and uncover digital forgeries. Like the Wiener attack in watermarking, the special correlation can be reduced or transferred to another image by our learned network.
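The forensic cue this abstract describes — interpolated samples being linear combinations of their neighbors, making the signal unusually predictable — can be illustrated with a toy 1-D experiment. This is a sketch of the general idea only, not the authors' network; `upsample_bilinear_1d` and the linear predictor below are stand-ins of my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample_bilinear_1d(x):
    """Insert a bilinearly interpolated sample between every pair of samples."""
    mid = (x[:-1] + x[1:]) / 2.0
    out = np.empty(x.size + mid.size)
    out[0::2] = x
    out[1::2] = mid
    return out

def predictor_residual(sig):
    """Fit a linear predictor of each sample from its two neighbors and
    return the mean squared prediction residual."""
    X = np.stack([sig[:-2], sig[2:]], axis=1)   # left/right neighbors
    y = sig[1:-1]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((X @ w - y) ** 2)

natural = rng.normal(size=512)                  # "raw" signal, no redundancy
interp = upsample_bilinear_1d(natural)          # interpolated: inserted samples
                                                # are exact neighbor averages
# The interpolated signal is far more predictable from its neighbors,
# which is exactly the statistical trace a forensic classifier can exploit.
print(predictor_residual(natural), predictor_residual(interp))
```

The same residual statistic, computed with a learned (rather than linear) predictor, is the kind of inter-pixel feature the paper's classifiers operate on.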
A Regularized Learning Method for Neural Networks Based on Sensitivity Analysis
Abstract
The SBLLM is a learning method for two-layer feedforward neural networks, based on sensitivity analysis, that calculates the weights by solving a system of linear equations. This yields an important saving in computational time, which significantly enhances the behavior of the method compared to other learning algorithms. This paper introduces a generalization of the SBLLM by adding a regularization term to the cost function. The theoretical basis for the method is given and its performance is illustrated.
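The core idea of obtaining weights from a linear system instead of gradient descent can be sketched for a single sigmoid layer: invert the activation on the targets and solve an ordinary least-squares problem. This is a minimal illustration on assumed toy data, not the paper's SBLLM algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 2-D inputs mapped through a known logistic relationship.
X = rng.normal(size=(200, 2))
X1 = np.hstack([X, np.ones((200, 1))])          # add a bias column
w_true = np.array([1.5, -2.0, 0.3])
d = 1.0 / (1.0 + np.exp(-(X1 @ w_true)))        # desired sigmoid outputs

# Instead of iterating gradient steps, invert the sigmoid on the targets
# and solve one linear least-squares problem for the weights.
eps = 1e-9
z = np.log(d / (1.0 - d + eps) + eps)           # logit of the targets
w, *_ = np.linalg.lstsq(X1, z, rcond=None)

print(np.round(w, 3))
```

One linear solve recovers the generating weights here; the saving over iterative gradient descent is the point the abstract emphasizes.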
Fixed point method of stepsize estimation for online neural network training
Abstract
Paweł Wawrzyński, Member, IEEE — This paper considers online training of feedforward neural networks. Training examples are only available sampled randomly from a given generator. What emerges in this setting is the problem of step-size (learning-rate) adaptation. A scheme for determining step sizes is introduced here that satisfies the following requirements: (i) it does not need any auxiliary problem-dependent parameters; (ii) it does not assume any particular loss function that the training process is intended to minimize; (iii) it makes the learning process stable and efficient. An experimental study with the 2-D Gabor function approximation is presented.
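As a point of comparison for why step-size selection matters when examples arrive one at a time from a generator, the classical normalized-LMS update — not the fixed-point method of this paper — adapts the effective step size to each example's magnitude:

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([2.0, -1.0])

def sample():
    """Generator of training examples: noisy linear targets."""
    x = rng.normal(size=2)
    return x, x @ w_true + 0.01 * rng.normal()

# Normalized LMS: the effective step size mu / (|x|^2 + eps) adapts to
# each example, keeping the online update stable without hand-tuning a
# global learning rate per problem.
w = np.zeros(2)
mu = 0.5
for _ in range(2000):
    x, y = sample()
    e = w @ x - y
    w -= mu * e * x / (x @ x + 0.1)   # +0.1 guards against tiny-norm inputs

print(np.round(w, 2))
```

The point of schemes like the paper's is to get this kind of stability without any problem-dependent constant at all (here, `mu` and the regularizer are still hand-chosen).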
A distributed learning algorithm based on two-layer artificial neural networks and genetic algorithms
Abstract
A distributed learning algorithm based on two-layer artificial neural networks and genetic algorithms
Fast Learning Neural Network using modified Corners Algorithm
Global Congress on Intelligent Systems
Abstract
In the past we have seen various developments in the philosophy and application of neural networks. Today we have the backpropagation algorithm, Hopfield networks, perceptrons, etc. All of these are very precise tools which model the data very well. Unfortunately, the problem faced these days is that of training the neural network in a short span of time over the test data. All of the above-mentioned tools may not be useful in situations where the neural network needs to be trained rapidly. Hence the solutions offered were the Corners rule and the associated CC1 to CC4 algorithms, each with various pros and cons. This paper uses a different type of modeling to represent data and hence solve the problem of fast learning. We use the distance separation between the training data and an unknown input to calculate the most probable output of the neural network. This algorithm is better than the others in that it places no special restrictions on the inputs, as was the case with CC3. The algorithm also uses an input model very similar to the traditional one in terms of inputs and outputs, so users may find it easy to switch between the traditional neural network style and the network proposed in this paper. The algorithm sets up a neural network whose weights are assigned by looking at the inputs. In testing, the inputs are provided and the most probable output is calculated. The neural network uses a single hidden layer, and the best neurons of the hidden layer are invoked for every input. The algorithm was trained on some points of a two-color picture; when we tried to reproduce the picture, the results showed the algorithm was efficient and accurate.
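The distance-separation rule the abstract describes — one hidden neuron per stored example, with the closest ("best") neuron determining the output — can be sketched as follows. The two-color picture and the `predict` helper are illustrative assumptions, not the paper's exact modified Corners algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Training points sampled from a two-color picture:
# color 0 on the left half, color 1 on the right half.
train_x = rng.uniform(0, 1, size=(50, 2))
train_y = (train_x[:, 0] > 0.5).astype(int)

def predict(x):
    """One hidden neuron per stored example; the closest neuron wins
    and its stored color becomes the network's output."""
    d = np.linalg.norm(train_x - x, axis=1)   # distance to every stored input
    return train_y[np.argmin(d)]

print(predict(np.array([0.1, 0.5])), predict(np.array([0.9, 0.5])))
```

"Training" here is just storing examples as weights, which is why this family of methods trades a little precision for near-instant training.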
LEVENBERG-MARQUARDT LEARNING NEURAL NETWORK FOR ADAPTIVE PRE-DISTORTION FOR TIME-VARYING HPA WITH MEMORY IN OFDM SYSTEMS
Abstract
This paper presents a new adaptive pre-distortion (PD) technique, based on neural networks (NN) with a tap delay line, for linearization of a High Power Amplifier (HPA) exhibiting memory effects. The adaptation, based on an iterative algorithm, is derived from direct learning for the NN PD. Equally important, the paper studies the application of different NN learning algorithms in order to determine the most adequate one for this NN PD. This comparison, examined through computer simulation of a 64-carrier, 16-QAM OFDM system, is based on a quality measure (Mean Square Error), the training time required to reach a particular quality level, and computational complexity. The chosen adaptive pre-distorter (an NN structure associated with an adaptive algorithm) has low complexity, fast convergence, and the best performance.
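The idea of pre-distortion — learning an approximate inverse of the amplifier so that the cascade is linear — can be sketched with a memoryless toy HPA and a polynomial pre-distorter fitted by least squares. This is an illustration of the concept only; the paper's pre-distorter is a neural network with a tap delay line to handle memory effects, which this sketch omits.

```python
import numpy as np

def hpa(x):
    """Toy memoryless HPA model: soft saturation (tanh-like compression)."""
    return np.tanh(1.5 * x) / 1.5

# Direct-learning idea: fit a pre-distorter F so that hpa(F(s)) ~= s.
# Measure input/output pairs of the amplifier, then fit the INVERSE
# mapping (output -> input) with an odd-power polynomial.
x = np.linspace(-0.5, 0.5, 201)
y = hpa(x)
B = np.stack([y, y**3, y**5], axis=1)
c, *_ = np.linalg.lstsq(B, x, rcond=None)

def predistort(s):
    return c[0] * s + c[1] * s**3 + c[2] * s**5

s = np.linspace(-0.4, 0.4, 101)                 # signal within the fitted range
err = np.max(np.abs(hpa(predistort(s)) - s))    # residual nonlinearity
print(err)
```

The residual error after pre-distortion is far smaller than the raw compression of the amplifier, which is the linearization goal the paper pursues with an adaptively trained NN instead of a fixed polynomial.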
A new initialization method for neural networks using sensitivity analysis
Bertha Guijarro-Berdiñas, Oscar Fontenla-Romero,
Abstract
Learning methods for feedforward neural networks find the network’s optimal parameters through a gradient-descent mechanism starting from an initial state of the parameters. This initial state influences both the convergence speed and the error that is finally achieved. In this paper, we present a sensitivity-analysis-based initialization method for two-layer feedforward neural networks, which uses a linear procedure to obtain the weights of each layer. First, random values are assigned to the outputs of the first layer; then these initial values are updated based on sensitivity formulas; and finally the weights are calculated using a linear system of equations. This new method has the advantage of achieving a good solution in just one epoch using little computational time. In this paper, we explore the use of this method as an initialization procedure on several data sets and learning algorithms, comparing its performance with other well-known initialization methods.
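The first and last steps of the procedure (assign random outputs to the first layer, then obtain the weights of each layer from linear systems) can be sketched as follows. The intermediate sensitivity-based update is omitted, and the data, network size, and sigmoid inversion are illustrative assumptions rather than the paper's exact formulas.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regression data.
X = rng.normal(size=(100, 3))
t = np.sin(X[:, 0]) + 0.1 * X[:, 1]
Xb = np.hstack([X, np.ones((100, 1))])         # inputs with a bias column

# Step 1: assign random (but valid, in (0,1)) outputs to the hidden layer.
H = rng.uniform(0.1, 0.9, size=(100, 5))       # 5 hidden neurons

# Step 2: solve a linear system for input-to-hidden weights that reproduce
# those outputs, by inverting the sigmoid on the assigned values.
Z = np.log(H / (1.0 - H))                      # desired pre-activations
W1, *_ = np.linalg.lstsq(Xb, Z, rcond=None)

# Step 3: solve a second linear system for the hidden-to-output weights.
Hb = np.hstack([1 / (1 + np.exp(-(Xb @ W1))), np.ones((100, 1))])
W2, *_ = np.linalg.lstsq(Hb, t, rcond=None)

mse = np.mean((Hb @ W2 - t) ** 2)
print(round(float(mse), 3))
```

Both solves are closed-form, which is why such an initialization can reach a reasonable starting error in effectively one epoch.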
A New Weight Initialization Method Using Cauchy’s Inequality Based on Sensitivity Analysis
, 2011
Abstract
In this paper, an efficient weight initialization method is proposed, using Cauchy’s inequality based on sensitivity analysis, to improve the convergence speed in single-hidden-layer feedforward neural networks. The proposed method ensures that the outputs of the hidden neurons lie in the active region, which increases the rate of convergence. The weights are then learned by minimizing the sum of squared errors and obtained by solving a linear system of equations. The proposed method is simulated on various problems; in all of them, the number of epochs and the time required by the proposed method are found to be minimal compared with other weight initialization methods.
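The "active region" constraint can be illustrated by rescaling random weights so that every hidden pre-activation over the data stays where the sigmoid's derivative is non-negligible. This is a sketch of the general idea only — the per-neuron rescaling below is a simple stand-in, not the paper's Cauchy's-inequality derivation.

```python
import numpy as np

rng = np.random.default_rng(5)

X = rng.normal(size=(200, 4))                  # training inputs

def init_active(n_hidden, X, limit=4.0):
    """Draw random weights, then rescale each hidden neuron so its
    pre-activations over the data stay inside |z| <= limit, the region
    where the sigmoid's derivative is not vanishingly small."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    z = X @ W
    scale = limit / np.abs(z).max(axis=0)      # per-neuron rescaling factor
    return W * scale

W = init_active(10, X)
print(np.abs(X @ W).max())
```

Keeping pre-activations in the active region avoids near-zero sigmoid gradients at the start of training, which is the mechanism behind the faster convergence the abstract reports.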