RedPel

1. Finger image quality assessment features – definitions and evaluation
Project Code : JIP1601                    Year : 2016 (IEEE)

Abstract: Finger image quality assessment is a crucial part of any system where high biometric performance and user satisfaction are desired. Several algorithms measuring selected aspects of finger image quality have been proposed in the literature, yet only a few of them have found their way into quality assessment algorithms used in practice. The authors provide comprehensive algorithm descriptions and make available implementations of adaptations of ten quality assessment algorithms from the literature which operate at the local or the global image level. They evaluate the performance on four datasets in terms of the capability to determine samples causing false non-matches and by their Spearman correlation with sample utility. The authors’ evaluation shows that both the capability to reject samples causing false non-matches and the correlation between features vary depending on the dataset.
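
To make the evaluation step concrete, the sketch below computes the Spearman rank correlation between per-sample quality scores and sample utilities. The arrays are synthetic stand-ins and the scipy call is an illustrative assumption, not code from the paper.

# A minimal sketch: Spearman correlation between quality scores and sample
# utility, as used to evaluate quality assessment features. The data here is
# synthetic; a real evaluation would use scores from the ten algorithms and
# utilities derived from comparison scores.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
quality_scores = rng.uniform(0, 100, size=500)                 # hypothetical quality feature values
sample_utility = quality_scores + rng.normal(0, 20, 500)       # hypothetical utility, loosely correlated

rho, p_value = spearmanr(quality_scores, sample_utility)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")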

2. Extra Security for Data Using Graphical Passwords
Project Code : JIP1602                       Year : 2016 (IEEE)

Abstract—Many security primitives are based on hard mathematical problems. Using hard AI problems for security is emerging as an exciting new paradigm, but has been underexplored. In this paper, we present a new security primitive based on hard AI problems, namely, a novel family of graphical password systems built on top of Captcha technology, which we call Captcha as graphical passwords (CaRP). CaRP is both a Captcha and a graphical password scheme. CaRP addresses a number of security problems altogether, such as online guessing attacks, relay attacks, and, if combined with dual-view technologies, shoulder-surfing attacks. Notably, a CaRP password can be found only probabilistically by automatic online guessing attacks even if the password is in the search set. CaRP also offers a novel approach to address the well-known image hotspot problem in popular graphical password systems, such as PassPoints, that often leads to weak password choices. CaRP is not a panacea, but it offers reasonable security and usability and appears to fit well with some practical applications for improving online security.

3. An Efficient Privacy-Preserving Ranked Keyword Search Method
Project Code : JIP1603                          Year : 2016 (IEEE)

Abstract—Cloud data owners prefer to outsource documents in an encrypted form for the purpose of privacy preserving. Therefore, it is essential to develop efficient and reliable ciphertext search techniques. One challenge is that the relationship between documents will normally be concealed in the process of encryption, which will lead to significant search accuracy performance degradation. Also, the volume of data in data centers has experienced a dramatic growth. This will make it even more challenging to design ciphertext search schemes that can provide efficient and reliable online information retrieval on large volumes of encrypted data. In this paper, a hierarchical clustering method is proposed to support more search semantics and also to meet the demand for fast ciphertext search within a big data environment. The proposed hierarchical approach clusters the documents based on the minimum relevance threshold, and then partitions the resulting clusters into sub-clusters until the constraint on the maximum size of cluster is reached. In the search phase, this approach can reach a linear computational complexity against an exponential size increase of the document collection. In order to verify the authenticity of search results, a structure called minimum hash sub-tree is designed in this paper. Experiments have been conducted using the collection set built from the IEEE Xplore. The results show that with a sharp increase of documents in the dataset the search time of the proposed method increases linearly whereas the search time of the traditional method increases exponentially. Furthermore, the proposed method has an advantage over the traditional method in the rank privacy and relevance of retrieved documents.
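
A minimal sketch of the threshold-and-split clustering idea is given below; TF-IDF vectors and k-means splits stand in for the paper's relevance-based measure, and the size limit and function names are made up for illustration.

# A hedged sketch of hierarchical clustering with a maximum cluster size,
# loosely mirroring the "split until the size constraint is met" idea.
# TF-IDF + k-means are stand-ins for the paper's relevance-based clustering.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

MAX_CLUSTER_SIZE = 2  # illustrative constraint on cluster size

def split_cluster(vectors, indices):
    """Recursively split a cluster until every leaf respects the size limit."""
    if len(indices) <= MAX_CLUSTER_SIZE:
        return [indices]
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors[indices])
    groups = {lab: [i for i, l in zip(indices, labels) if l == lab] for lab in set(labels)}
    if len(groups) < 2:               # degenerate split: stop to avoid infinite recursion
        return [indices]
    leaves = []
    for sub in groups.values():
        leaves.extend(split_cluster(vectors, sub))
    return leaves

docs = ["cloud encrypted search", "ranked keyword retrieval",
        "hierarchical clustering of documents", "privacy preserving cloud storage",
        "minimum hash sub-tree verification"]
X = TfidfVectorizer().fit_transform(docs).toarray()
print(split_cluster(X, list(range(len(docs)))))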

4. An Efficient SVD-Based Method for Image Denoising.
Project Code : JIP1604                       Year : 2016 (IEEE)

Abstract—Nonlocal self-similarity of images has attracted considerable interest in the field of image processing and has led to several state-of-the-art image denoising algorithms, such as block-matching and 3-D filtering, principal component analysis with local pixel grouping, patch-based locally optimal Wiener, and spatially adaptive iterative singular-value thresholding. In this paper, we propose a computationally simple denoising algorithm using nonlocal self-similarity and low-rank approximation (LRA). The proposed method consists of three basic steps. First, our method classifies similar image patches by the block-matching technique to form similar patch groups, which results in patch groups that are low rank. Next, each group of similar patches is factorized by singular value decomposition (SVD) and estimated by taking only the few largest singular values and corresponding singular vectors. Finally, an initial denoised image is generated by aggregating all processed patches. For low-rank matrices, SVD can provide the optimal energy compaction in the least-squares sense. The proposed method exploits the optimal energy compaction property of SVD to lead to an LRA of similar patch groups. Unlike other SVD-based methods, the LRA in the SVD domain avoids learning the local basis for representing image patches, which is usually computationally expensive. The experimental results demonstrate that the proposed method can effectively reduce noise and be competitive with the current state-of-the-art denoising algorithms in terms of both quantitative metrics and subjective visual quality.
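
The core low-rank approximation step can be illustrated with a short sketch: stack similar patches as rows of a matrix, keep only the largest singular values, and reconstruct. The patch matrix and the chosen rank below are illustrative assumptions, not values from the paper.

# A minimal sketch of the SVD-based low-rank approximation (LRA) applied to a
# group of similar patches. Each row is one vectorized patch; keeping only the
# largest singular values suppresses noise while preserving shared structure.
import numpy as np

rng = np.random.default_rng(1)
clean_patch = rng.uniform(0, 255, size=64)                            # one 8x8 patch, vectorized
group = np.tile(clean_patch, (20, 1)) + rng.normal(0, 15, (20, 64))   # 20 noisy, similar patches

U, s, Vt = np.linalg.svd(group, full_matrices=False)
rank = 3                                        # illustrative rank; the paper selects it adaptively
denoised_group = (U[:, :rank] * s[:rank]) @ Vt[:rank]

print("noise std before:", np.std(group - clean_patch))
print("noise std after :", np.std(denoised_group - clean_patch))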

5. Comparison of Steganography Using Different Algorithms.
Project Code : JIP1605               Year : 2016 (IEEE)

Abstract : Steganography is an art of hiding the existence of important information in a cover file. It is an information hiding technique used for sending and receiving confidential data over the internet. Steganography is done in two parts: the first is to embed data in a regular computer file, and the second is to extract that information. Secret data can be embedded in various regular computer files, but video files play an important role by providing more embedding space. This paper provides a survey of various research papers on video steganography.
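
The embed/extract split described above can be illustrated with a minimal least-significant-bit (LSB) sketch on a single frame; LSB embedding is a common baseline used here purely for illustration, not one of the specific algorithms surveyed.

# A minimal sketch of LSB steganography on one grayscale frame: embed a byte
# string into the least significant bits of pixels, then extract it back.
import numpy as np

def embed(frame, payload):
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = frame.flatten()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSB of the first pixels
    return flat.reshape(frame.shape)

def extract(frame, n_bytes):
    bits = frame.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for one video frame
stego = embed(cover.copy(), b"secret")
print(extract(stego, 6))  # b'secret'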

6. Object detection in video/image.
Project Code : JIP1606                   Year : 2016 (IEEE)

Abstract : Object tracking is an important task within the field of computer vision. The proliferation of high-powered computers, the availability of high-quality and inexpensive video cameras, and the increasing need for automated video analysis have generated a great deal of interest in object tracking. In its simplest form, tracking can be defined as a method of following an object through successive image frames to determine its relative movement with respect to other objects. In other words, a tracker assigns consistent labels to the tracked objects in different frames of a video.

One can simplify tracking by imposing constraints on the motion or appearance of objects. One can further constrain the object motion to be of constant velocity or acceleration based on prior information. Prior knowledge about the number and the size of objects, or the object appearance and shape can also be used to simplify the problem.


7. Handwritten Chinese Text Recognition by Integrating Multiple Contexts
Project Code : JIP1607                        Year : 2016 (IEEE)

Abstract — We describe Google’s online handwriting recognition system that currently supports 22 scripts and 97 languages. The system’s focus is on fast, high-accuracy text entry for mobile, touch-enabled devices. We use a combination of state-of-the-art components and combine them with novel additions in a flexible framework. This architecture allows us to easily transfer improvements between languages and scripts. This made it possible to build recognizers for languages that, to the best of our knowledge, are not handled by any other online handwriting recognition system. The approach also enabled us to use the same architecture both on very powerful machines for recognition in the cloud and on mobile devices with more limited computational power, by changing some of the settings of the system. In this paper, we give a general overview of the system architecture and the novel components, such as unified time- and position-based input interpretation, trainable segmentation, minimum-error rate training for feature combination, and a cascade of pruning strategies. We present experimental results for different setups. The system is currently publicly available in several Google products, for example in Google Translate and as an input method for Android devices.

8. Learn to Personalized Image Search from the Photo Sharing Websites
Project Code : JIP1608                         Year : 2016 (IEEE)

Abstract — As the amount of Web information grows rapidly, search engines must be able to retrieve information according to the user's preference. In this paper, we propose a new web search personalization approach that captures the user's interests and preferences in the form of concepts by mining search results and their clickthroughs. Due to the important role location information plays in mobile search, we separate concepts into content concepts and location concepts, and organize them into ontologies to create an ontology-based, multi-facet (OMF) profile to precisely capture the user's content and location interests and hence improve the search accuracy. Moreover, recognizing the fact that different users and queries may have different emphases on content and location information, we introduce the notions of content and location entropies to measure the amount of content and location information associated with a query, and click content and location entropies to measure how much the user is interested in the content and location information in the results. Accordingly, we propose to define personalization effectiveness based on the entropies and use it to balance the weights between the content and location facets. Finally, based on the derived ontologies and personalization effectiveness, we train an SVM to adapt a personalized ranking function for re-ranking of future searches. We conduct extensive experiments to compare the precision produced by our OMF profiles and that of a baseline method. Experimental results show that OMF improves the precision significantly compared to the baseline.
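
The content and location entropies mentioned above are essentially Shannon entropies over the distribution of concepts associated with a query. The sketch below computes such an entropy; the concept names and counts are made up for illustration.

# A minimal sketch: Shannon entropy of the concept distribution for a query,
# in the spirit of the "content entropy" / "location entropy" idea. Higher
# entropy means the query is associated with more diverse concepts.
import math
from collections import Counter

def concept_entropy(concept_counts):
    total = sum(concept_counts.values())
    probs = [c / total for c in concept_counts.values()]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical clickthrough-derived concept counts for the query "hotel".
content_concepts  = Counter({"booking": 12, "reviews": 7, "luxury": 3})
location_concepts = Counter({"Paris": 9, "Rome": 9, "Tokyo": 4})

print("content entropy :", round(concept_entropy(content_concepts), 3))
print("location entropy:", round(concept_entropy(location_concepts), 3))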

9. Enhanced Security for Online Databases.
Project Code : JIP1609                       Year : 2016 (IEEE)

Abstract — In this era, due to the unbelievable development of the internet, various online attacks have increased. Among all such attacks, the most popular is phishing. These attacks are carried out to extract confidential information, such as banking information and passwords, from unsuspecting victims for fraudulent purposes. Confidential data cannot be uploaded directly to a website, since this is risky. In this paper, data is encrypted in video, and visual cryptography is used for login to our online database system to provide more security.
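
One building block referenced above is visual cryptography. The sketch below shows the classic (2, 2) scheme for a binary image, in which each secret pixel is expanded into identical (white) or complementary (black) 2x2 sub-pixel patterns; this standard construction illustrates the technique only and is not the paper's exact login scheme.

# A minimal sketch of (2, 2) visual cryptography: split a binary secret image
# into two noise-like shares; stacking (OR-ing) the shares reveals the secret.
import numpy as np

PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]  # 2x2 sub-pixel patterns

def make_shares(secret):
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    s2 = np.zeros_like(s1)
    rng = np.random.default_rng(0)
    for y in range(h):
        for x in range(w):
            p = PATTERNS[rng.integers(2)]
            s1[2*y:2*y+2, 2*x:2*x+2] = p
            # white pixel: identical patterns; black pixel: complementary patterns
            s2[2*y:2*y+2, 2*x:2*x+2] = p if secret[y, x] == 0 else 1 - p
    return s1, s2

secret = np.zeros((4, 4), dtype=np.uint8)
secret[1:3, 1:3] = 1                      # a small black square as the secret
share1, share2 = make_shares(secret)
recovered = share1 | share2               # stacking shares: black pixels come back fully black
print(recovered)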

10. Face detection in video/image.
Project Code : JIP1610                        Year : 2016 (IEEE)

Abstract — This paper proposes a generic methodology for the semi-automatic generation of reliable position annotations for evaluating multi-camera people-trackers on large video data sets. Most of the annotation data are automatically computed, by estimating a consensus tracking result from multiple existing trackers and people detectors and classifying it as either reliable or not. A small subset of the data, composed of tracks with insufficient reliability, is verified by a human using a simple binary decision task, a process faster than marking the correct person position. The proposed framework is generic and can handle additional trackers. We present results on a data set of ∼6 h captured by 4 cameras, featuring a person in a holiday flat, performing activities such as walking, cooking, eating, cleaning, and watching TV. When aiming for a tracking accuracy of 60 cm, 80% of all video frames are automatically annotated. The annotations for the remaining 20% of the frames were added after human verification of an automatically selected subset of data. This involved ∼2.4 h of manual labor. According to a subsequent comprehensive visual inspection to judge the annotation procedure, we found 99% of the automatically annotated frames to be correct. We provide guidelines on how to apply the proposed methodology to new data sets. We also provide an exploratory study for the multi-target case, applied on the existing and new benchmark video sequences.

11. Elliptic Curve Cryptography.
Project Code : JIP1611                        Year : 2016 (IEEE)

Abstract — Secure and efficient data storage is needed in the cloud environment in the modern era of the information technology industry. In the present scenario, the cloud verifies the authenticity of cloud services without knowledge of the user's identity. The cloud provides massive data access directly through the internet, and a centralized storage mechanism is followed for effective access to data. Cloud service providers normally acquire the software and hardware resources, and cloud consumers avail themselves of the services through internet access on a lease basis. Cloud security is enhanced by applying cryptographic techniques to avoid vulnerabilities. Intractable computability is achieved in the cloud by using a public key cryptosystem. This paper proposes the approach of applying hyperelliptic curve cryptography for data protection in the cloud with a small key size. The proposed system has the further advantage of eliminating intruders in cloud computing. The efficacy of the system lies in providing high security for cloud data.
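
For a flavor of how curve-based public-key cryptography can protect cloud data, the sketch below performs an ECDH key agreement on the standard curve P-256 with the Python cryptography package and derives a symmetric key. Note that ordinary elliptic curve cryptography is used here as an illustration; it is not the hyperelliptic variant proposed in the paper.

# A hedged sketch: elliptic-curve Diffie-Hellman key agreement with curve
# P-256, followed by HKDF key derivation. Standard ECC stands in here for the
# hyperelliptic scheme discussed in the abstract.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each party generates its own key pair.
user_key  = ec.generate_private_key(ec.SECP256R1())
cloud_key = ec.generate_private_key(ec.SECP256R1())

# Both sides compute the same shared secret from their own private key and the
# other party's public key.
user_shared  = user_key.exchange(ec.ECDH(), cloud_key.public_key())
cloud_shared = cloud_key.exchange(ec.ECDH(), user_key.public_key())
assert user_shared == cloud_shared

# Derive a 256-bit symmetric key for encrypting data stored in the cloud.
aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"cloud-data-encryption").derive(user_shared)
print(len(aes_key), "byte symmetric key derived")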

12. Vehicle detection in Aerial Surveillance.
Project Code : JIP1612                 Year : 2016 (IEEE)

Abstract: We present an automatic vehicle detection system for aerial surveillance in this paper. In this system, we escape from the stereotype and existing frameworks of vehicle detection in aerial surveillance, which are either region based or sliding window based. We design a pixel-wise classification method for vehicle detection. The novelty lies in the fact that, in spite of performing pixel-wise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and non-vehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and the accuracy of detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when applying pixel-wise classification via DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate flexibility and good generalization abilities of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.
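
One concrete piece of the pipeline above is edge detection with automatically adjusted thresholds. The sketch below uses OpenCV's Canny detector with a common median-based heuristic for picking the two thresholds; this heuristic is a stand-in for the moment-preserving thresholding described in the abstract.

# A hedged sketch: Canny edge detection with thresholds picked automatically
# from the image's median intensity (a stand-in for the moment-preserving
# threshold selection used in the paper).
import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    v = float(np.median(gray))
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(gray, lower, upper)

# Synthetic stand-in for an aerial frame; replace with cv2.imread("frame.png", 0).
frame = np.full((120, 120), 60, dtype=np.uint8)
cv2.rectangle(frame, (30, 40), (80, 70), 200, -1)   # a bright "vehicle-like" blob
frame = cv2.GaussianBlur(frame, (5, 5), 0)

edges = auto_canny(frame)
print("edge pixels:", int(np.count_nonzero(edges)))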

13. Photo Morphing Detection.
Project Code : JIP1613       Year : 2016 (IEEE)

ABSTRACT : In this digital world we come across many image processing software that produce doctored Images with high sophistication, which are manipulated in such a way that the tampering is not easily visible to naked eye. The authenticity of a digital image has become a challenging task due to the various tools present in the photo editing software packages. There are number of ways of tampering an Image, such as splicing two different images together, removal of objects from the image, addition of objects in the image, change of appearance of objects in the image or resizing the image. This Image Morphing detection technique detects traces of digital tampering in the complete absence of any form of digital watermark or signature and is therefore referred as passive. So there is a need for developing techniques to distinguish the original images from the manipulated ones, the genuine ones from the doctored ones. In this paper we describe a novel approach for detecting Image morphing. The new scheme is designed to detect any changes to a signal. We recognize that images from digital cameras contain traces of re-sampling as a result of using a color filter array with demosaicing algorithms. Our results show that the proposed scheme has a good accuracy in locating tampered pixels

14. Video watermarking by DCT algorithm.
Project Code : JIP1614                Year : 2016 (IEEE)

ABSTRACT: Video data hiding is still an important research topic due to the design complexities involved. We propose a new video data hiding method that makes use of the erasure correction capability of Repeat Accumulate codes and the superiority of Forbidden Zone Data Hiding. Selective embedding is utilized in the proposed method to determine host signal samples suitable for data hiding. This method also contains a temporal synchronization scheme in order to withstand frame drop and insert attacks. The proposed framework is tested on typical broadcast material against MPEG-2 and H.264 compression and frame-rate conversion attacks, and is compared with other well-known video data hiding methods. The decoding error values are reported for typical system parameters. The simulation results indicate that the framework can be successfully utilized in video data hiding applications.
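
Since the project title refers to DCT-based watermarking, a minimal block-DCT embedding sketch is given below: one watermark bit per 8x8 block, encoded by forcing the sign of a mid-frequency coefficient. This is a generic illustration and not the Forbidden Zone Data Hiding method described in the abstract; the coefficient position and strength are assumptions.

# A minimal sketch of block-DCT watermark embedding: set one mid-frequency
# coefficient per 8x8 block to encode one bit, then read the bits back.
import cv2
import numpy as np

COEFF = (3, 4)    # illustrative mid-frequency coefficient position
STRENGTH = 25.0   # illustrative embedding strength

def embed_bits(frame, bits):
    out = frame.astype(np.float32).copy()
    for k, bit in enumerate(bits):
        y, x = divmod(k, frame.shape[1] // 8)
        block = cv2.dct(out[8*y:8*y+8, 8*x:8*x+8].copy())
        block[COEFF] = STRENGTH if bit else -STRENGTH
        out[8*y:8*y+8, 8*x:8*x+8] = cv2.idct(block)
    return np.clip(out, 0, 255).astype(np.uint8)

def extract_bits(frame, n_bits):
    f = frame.astype(np.float32)
    bits = []
    for k in range(n_bits):
        y, x = divmod(k, frame.shape[1] // 8)
        bits.append(int(cv2.dct(f[8*y:8*y+8, 8*x:8*x+8].copy())[COEFF] > 0))
    return bits

frame = np.tile(np.linspace(60, 190, 64, dtype=np.uint8), (64, 1))  # smooth stand-in frame
watermarked = embed_bits(frame, [1, 0, 1, 1, 0, 0, 1, 0])
print(extract_bits(watermarked, 8))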

15. Content-Based Image Retrieval on a Global Server.
Project Code : JIP1615                 Year : 2016 (IEEE)

Abstract : Content-based image retrieval (CBIR) is the most widely accepted and frequently used image retrieval method, because it can be used to manage image databases efficiently and effectively. CBIR methods usually retrieve images by image features. In this paper, we exploit a region called the affine invariant region (AIR) as an image feature to help effectively retrieve images that have been attacked or processed. Moreover, we use vector quantization to reduce the feature comparisons and improve the retrieval efficiency. The experimental results show that the method achieves high recall and precision and is promising.
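
The vector quantization step can be sketched with k-means: feature descriptors are quantized against a small codebook so that images are compared by codeword histograms rather than raw features. The random descriptors and codebook size below are stand-ins, not the paper's affine invariant region features.

# A hedged sketch of vector quantization for retrieval: quantize feature
# vectors with a k-means codebook and compare images by codeword histograms.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
CODEBOOK_SIZE = 16  # illustrative codebook size

# Train a codebook from descriptors pooled over a database of images.
training_descriptors = rng.normal(size=(1000, 32))
codebook = KMeans(n_clusters=CODEBOOK_SIZE, n_init=10, random_state=0).fit(training_descriptors)

def histogram(descriptors):
    words = codebook.predict(descriptors)
    h = np.bincount(words, minlength=CODEBOOK_SIZE).astype(float)
    return h / h.sum()

query_hist = histogram(rng.normal(size=(50, 32)))
db_hist    = histogram(rng.normal(size=(80, 32)))
print("histogram intersection similarity:", np.minimum(query_hist, db_hist).sum())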

16. Reversible Data Hiding: Advances in the Past Two Decades.
Project Code : JIP1616                         Year : 2016 (IEEE)

Abstract — In the past two decades, reversible data hiding (RDH), also referred to as lossless or invertible data hiding, has gradually become a very active research area in the field of data hiding. This has been verified by the growing number of papers on increasingly wide-spread subjects in the field of RDH research published in recent years. In this survey paper, the various RDH algorithms and research works are classified into the following six categories: 1) RDH in the image spatial domain; 2) RDH in the image compressed domain (e.g., JPEG); 3) RDH suitable for image semi-fragile authentication; 4) RDH with image contrast enhancement; 5) RDH in encrypted images, which is expected to have wide application in cloud computation; and 6) RDH in video and audio. For each of these six categories, the history of technical developments, the current state of the art, and possible future research directions are presented and discussed. It is expected that RDH technology and its applications in the real world will continue to move ahead.

17. On the Properties of Non-media Digital Watermarking: A Review of State of the Art Techniques.
Project Code : JIP1617                       Year : 2016 (IEEE)

Abstract —Over the last 25 years, there has been much work on multimedia digital watermarking. In this domain, the primary limitation to watermark strength has been in its visibility. For multimedia watermarks, invisibility is defined in human terms (that is, in terms of human sensory limitations). In this paper, we review recent developments in the non-media applications of data watermarking, which have emerged over the last decade as an exciting new sub-domain. Since by definition, the intended receiver should be able to detect the watermark, we have to redefine invisibility in an acceptable way that is often application-specific and thus cannot be easily generalized. In particular, this is true when the data is not intended to be directly consumed by humans. For example, a loose definition of robustness might be in terms of the resilience of a watermark against normal host data operations, and of invisibility as resilience of the data interpretation against change introduced by the watermark. In our paper, we classify the data in terms of data mining rules on complex types of data such as time-series, symbolic sequences, data streams and so forth. We emphasize the challenges involved in non-media watermarking in terms of common watermarking properties including invisibility, capacity, robustness, and security. With the aid of a few examples of watermarking applications, we demonstrate these distinctions and we look at the latest research in this regard to make our argument clear and more meaningful. As the last aim, we look at the new challenges of digital watermarking that have arisen with the evolution of big data.

18. Remote Authentication via Biometrics: A Robust Video-Object Steganographic Mechanism Over Wireless Networks.
Project Code : JIP1618

ABSTRACT: In wireless communications, sensitive information is frequently exchanged, requiring remote authentication. Remote authentication involves the submission of encrypted information, along with visual and audio cues (facial images/videos, human voice, and so on). Nevertheless, Trojan horse and other attacks can cause serious problems, especially in the cases of remote examinations (in remote studying) or interviewing (for personnel hiring). This paper proposes a robust authentication mechanism based on semantic segmentation, chaotic encryption, and data hiding. Assuming that user X wants to be remotely authenticated, initially X's video object (VO) is automatically segmented using a head-and-body detector. Next, one of X's biometric signals is encrypted by a chaotic cipher. Afterwards, the encrypted signal is inserted into the most significant wavelet coefficients of the VO, using its qualified significant wavelet trees (QSWTs). QSWTs provide both invisibility and significant resistance against lossy transmission and compression, conditions that are typical of wireless networks. Finally, the inverse discrete wavelet transform is applied to provide the stego-object. Experimental results regarding: 1) the security merits of the proposed encryption scheme; 2) robustness to steganalytic attacks, to various transmission losses, and to JPEG compression ratios; and 3) bandwidth efficiency measures indicate the promising performance of the proposed biometrics-based authentication scheme.
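
The chaotic cipher step can be illustrated with a logistic-map keystream: a chaotic sequence seeded by a secret key is quantized to bytes and XORed with the biometric signal. The map parameters and signal below are illustrative assumptions, not the paper's specific cipher.

# A hedged sketch of chaotic encryption: generate a keystream from the logistic
# map x_{n+1} = r * x_n * (1 - x_n) and XOR it with a biometric byte stream.
# Decryption reuses the same key (x0, r); parameters here are illustrative.
import numpy as np

def logistic_keystream(x0, r, n):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256    # quantize the chaotic value to a byte
    return out

def chaotic_xor(data, x0=0.61803, r=3.99):
    ks = logistic_keystream(x0, r, data.size)
    return data ^ ks                    # XOR is its own inverse

biometric = np.random.randint(0, 256, size=32, dtype=np.uint8)  # stand-in biometric signal
cipher = chaotic_xor(biometric)
plain  = chaotic_xor(cipher)
print(bool(np.array_equal(plain, biometric)))  # True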

19. High-speed visual tracking with mixed rotation invariant description.
Project Code : JIP1619                         Year : 2016 (IEEE)

Abstract : Visual target tracking is widely applied in visual surveillance, human–computer interaction, visual navigation and activity analysis. However, the response speed of conventional tracking systems is limited to <60 fps due to serial processing. Some researchers adopt parallel single-instruction-multiple-data (SIMD) processors to speed up tracking algorithms [1–3]. However, these processors can only carry out simple algorithms such as background subtraction, segmentation and motion detection, thus they can only be applied to certain sceneries with a clean background. The local binary pattern (LBP) and histogram of gradient (HOG) feature descriptions are widely used in target detection and tracking [4, 5]. However, both the HOG and LBP histograms are rotation variant, which results in target shifting and tracking failure. In this Letter, we propose a mixed rotation invariant description (MRID)-based tracking algorithm and a novel high-speed visual tracking system. This MRID is invariant to rotation and illumination changes so that it achieves more robust tracking than previously reported fast tracking algorithms. The proposed tracking system integrates processors with pixel and row-level parallelism to speed up the tracking algorithm. The system with hierarchical parallelism can achieve over 1000 fps processing speed.
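
As a reference point for rotation-invariant description, the sketch below computes a rotation-invariant uniform LBP histogram with scikit-image. This standard descriptor is shown for illustration only and is not the MRID proposed in the Letter.

# A hedged sketch: rotation-invariant uniform LBP histogram of an image patch,
# using scikit-image. A standard descriptor shown for comparison; the MRID in
# the Letter combines several rotation-invariant cues.
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1  # 8 neighbours at radius 1

def lbp_histogram(patch):
    codes = local_binary_pattern(patch, P, R, method="uniform")        # rotation-invariant uniform codes
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)  # P + 2 possible codes
    return hist

patch = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
print(np.round(lbp_histogram(patch), 3))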

20. Multimodal BCIs: Target Detection, Multidimensional Control, and Awareness Evaluation in Patients With Disorder of Consciousness.
Project Code : JIP1620                        Year : 2016 (IEEE)

ABSTRACT | Despite rapid advances in the study of brain–computer interfaces (BCIs) in recent decades, two fundamental challenges, namely, improvement of target detection performance and multidimensional control, continue to be major barriers for further development and applications. In this paper, we review the recent progress in multimodal BCIs (also called hybrid BCIs), which may provide potential solutions for addressing these challenges. In particular, improved target detection can be achieved by developing multimodal BCIs that utilize multiple brain patterns, multimodal signals, or multisensory stimuli. Furthermore, multidimensional object control can be accomplished by generating multiple control signals from different brain patterns or signal modalities. Here, we highlight several representative multimodal BCI systems by analyzing their paradigm designs, detection/control methods, and experimental results. To demonstrate their practicality, we report several initial clinical applications of these multimodal BCI systems, including awareness evaluation/detection in patients with disorder of consciousness (DOC). As an evolving research area, the study of multimodal BCIs is increasingly requiring more synergetic efforts from multiple disciplines for the exploration of the underlying brain mechanisms, the design of new effective paradigms and means of neurofeedback, and the expansion of the clinical applications of these systems.

21. Ontology-Based Semantic Image Segmentation Using Mixture Models and Multiple CRFs.
Project Code : JIP1621                        Year : 2016 (IEEE)

Abstract — Semantic image segmentation is a fundamental yet challenging problem, which can be viewed as an extension of conventional object detection with close relation to image segmentation and classification. It aims to partition images into non-overlapping regions that are assigned predefined semantic labels. Most of the existing approaches utilize and integrate low-level local features and high-level contextual cues, which are fed into an inference framework such as the conditional random field (CRF). However, the lack of meaning in the primitives (i.e., pixels or superpixels) and the cues provides low discriminatory capabilities, since they are rarely object-consistent. Moreover, blind combinations of heterogeneous features and contextual cues exploitation through limited neighborhood relations in the CRFs tend to degrade the labeling performance. This paper proposes an ontology-based semantic image segmentation (OBSIS) approach that jointly models image segmentation and object detection. In particular, a Dirichlet process mixture model transforms the low-level visual space into an intermediate semantic space, which drastically reduces the feature dimensionality. These features are then individually weighed and independently learned within the context, using multiple CRFs. The segmentation of images into object parts is hence reduced to a classification task, where object inference is passed to an ontology model. This model resembles the way by which humans understand images through the combination of different cues, context models, and rule-based learning of the ontologies. Experimental evaluations using the MSRC-21 and PASCAL VOC’2010 data sets show promising results.
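
The Dirichlet process mixture step, which maps low-level features into an intermediate semantic space, can be sketched with scikit-learn's variational Dirichlet process Gaussian mixture. The feature vectors and component cap below are illustrative, and the CRF and ontology stages are not shown.

# A hedged sketch: cluster low-level visual features with a Dirichlet process
# Gaussian mixture (variational approximation), so each superpixel gets a
# compact "semantic space" assignment. CRF/ontology stages are omitted.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Stand-in for low-level features (e.g., colour/texture) of 300 superpixels.
features = np.vstack([rng.normal(0, 1, (150, 6)), rng.normal(4, 1, (150, 6))])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                    # upper bound; unused components shrink away
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(features)

assignments = dpgmm.predict(features)
print("effective components used:", len(np.unique(assignments)))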

22. Review of Video and Image Defogging Algorithms and Related Studies on Image Restoration and Enhancement.
Project Code : JIP1622                         Year : 2016 (IEEE)

ABSTRACT : Video and images acquired by a visual system are seriously degraded under hazy and foggy weather, which will affect the detection, tracking, and recognition of targets. Thus, restoring the true scene from such a foggy video or image is of significance. The main goal of this paper was to summarize current video and image defogging algorithms. We first presented a review of the detection and classification methods for foggy images. Then, we summarized existing image defogging algorithms, including image restoration algorithms, image contrast enhancement algorithms, and fusion-based defogging algorithms. We also presented current video defogging algorithms. We summarized objective image quality assessment methods that have been widely used for the comparison of different defogging algorithms, followed by an experimental comparison of various classical image defogging algorithms. Finally, we presented the problems of video and image defogging which need to be further studied.

23. Learning Sampling Distributions for Efficient Object Detection
Project Code : JIP1623                        Year : 2016 (IEEE)

Abstract — Object detection is an important task in computer vision and machine intelligence systems. Multistage particle windows (MPW), proposed by Gualdi et al., is an algorithm for fast and accurate object detection. By sampling particle windows (PWs) from a proposal distribution (PD), MPW avoids exhaustively scanning the image. Despite its success, it is unknown how to determine the number of stages and the number of PWs in each stage. Moreover, it has to generate too many PWs in the initialization step and it unnecessarily regenerates too many PWs around object-like regions. In this paper, we attempt to solve the problems of MPW. An important fact we use is that there is a large probability for a randomly generated PW not to contain the object, because the object is a sparse event relative to the huge number of candidate windows. Therefore, we design a PD so as to efficiently reject the huge number of non-object windows. Specifically, we propose the concepts of rejection, acceptance, and ambiguity windows and regions. Then, these concepts are used to form and update a dented uniform distribution and a dented Gaussian distribution. This contrasts with MPW, which utilizes only one region of support. The PD of MPW is acceptance-oriented, whereas the PD of our method (called iPW) is rejection-oriented. Experimental results on human and face detection demonstrate the efficiency and the effectiveness of the iPW algorithm. The source code is publicly accessible.
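
One way to read the rejection-oriented proposal distribution is sketched below: window centres are sampled from a probability map, and every rejected window "dents" (suppresses) the map around its location so later samples avoid known background. The toy classifier, map resolution, and dent radius are assumptions for illustration.

# A hedged sketch of a rejection-oriented proposal distribution: sample window
# centres from a probability map and carve a "dent" around every rejected
# window so the sampler stops revisiting background. The classifier is a toy.
import numpy as np

rng = np.random.default_rng(0)
H, W, DENT = 60, 60, 5            # map size and illustrative dent radius
prob_map = np.ones((H, W))        # start from a (dented) uniform distribution

def toy_classifier(y, x):
    return 20 <= y < 30 and 35 <= x < 45   # pretend the object occupies this region

detections = []
for _ in range(400):
    flat = prob_map.ravel() / prob_map.sum()
    idx = rng.choice(H * W, p=flat)        # sample a window centre from the proposal
    y, x = divmod(idx, W)
    if toy_classifier(y, x):
        detections.append((y, x))
    else:
        # dent the proposal around the rejected window
        prob_map[max(0, y-DENT):y+DENT+1, max(0, x-DENT):x+DENT+1] *= 0.1

print("accepted windows:", len(detections))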

If you want more of the latest topics with IEEE papers and abstracts, please download the project list and send us the topic names; we will send you the IEEE paper, abstract, PPT, etc.