Last Updated | November 17, 2023
Overview of Skin Cancer Detection Using Convolutional Neural Networks
Many diseases cause numerous fatalities, and the primary reason is usually a delayed diagnosis. Cancer is one of those diseases that are often diagnosed only in their later stages. When a person notices a lesion on their skin, feels pain in the chest, or experiences some other symptom, it is generally not taken seriously: they self-medicate to suppress the symptom instead of consulting a doctor. Fear of what a doctor might find keeps other patients away. There is therefore a need for a solution that helps people identify such symptoms early.
We aim to develop a system that offers a feasible solution to this issue: one that scrutinizes a skin lesion, produces an instant risk assessment, and suggests the best next step. It can also recommend suitable hospitals. The proposed system detects the early symptoms of skin cancer, while it is still treatable, and directs the user to affordable treatment options. By providing a pre-evaluation of the disease, it can help reduce the panic surrounding the issue. The application will not, however, suggest any medication or treatment.
Skin cancer is the abnormal growth of skin cells. There are three major types of skin cancer:
- Basal cell carcinoma.
- Squamous cell carcinoma.
- Melanoma.
Early detection gives you the best chance for successful treatment, so check your skin for suspicious changes to help catch skin cancer at its earliest stages. Melanoma can develop anywhere on the body, in otherwise normal skin or in an existing mole that becomes cancerous; in men it most often appears on the face or trunk. Merkel cell carcinoma, a rarer type, causes firm, shiny nodules on or beneath the skin.
If skin cancer is caught early, your dermatologist can treat it with little or no scarring and high odds of eliminating it. The main causes of skin cancer are the sun’s harmful ultraviolet (UV) rays and the use of UV tanning beds.
Transfer learning is the reuse of a pre-trained model on a new problem. It is currently very popular in deep learning because it allows deep neural networks to be trained with little data. In transfer learning, the early and middle layers of a pre-trained network are reused and only the later layers are retrained, so the new task can leverage the features the network learned from the labeled data of its original task. This makes it a natural fit for medical imaging software, where labeled data is scarce.
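As a minimal sketch of this idea in Keras: load a pre-trained backbone, freeze its early and middle layers, and attach a new head for the benign/malignant task. The single-sigmoid head and pooling layer are illustrative assumptions; `weights=None` keeps the sketch runnable offline, whereas in practice you would pass `weights="imagenet"` to actually reuse the pre-trained features.

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# Backbone without its original 1000-class head. In practice use
# weights="imagenet"; None avoids a download in this sketch.
base = InceptionV3(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the early and middle layers

# New head: only these layers are trained on the skin-lesion data.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # benign (0) vs malignant (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Forward pass on a dummy image to confirm the wiring.
out = model.predict(np.zeros((1, 224, 224, 3), dtype="float32"), verbose=0)
print(out.shape)  # (1, 1): one probability per image
```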
Machine Learning Using Convolutional Neural Networks
ML algorithms are trained on data to identify patterns and make predictions. Machine learning in healthcare is being used to develop new diagnostic tools, improve treatment outcomes, and personalize care.
One area where ML is having a major impact is skin cancer detection. Skin cancer is the most common type of cancer in the United States, and early detection is crucial for successful treatment. However, diagnosing skin cancer can be challenging, even for experienced dermatologists.
Convolutional neural networks (CNNs) are a type of ML algorithm that is particularly well-suited for image analysis. CNNs have been shown to be highly effective at detecting skin cancer in dermoscopic images, which are magnified images of the skin taken with a specialized camera.
Fig. 1. An example of a Convolutional Neural Network
After setting the research goal, retrieving the data proved tedious and challenging. Although the internet is now full of datasets, finding one fit for our goal among the myriad available was difficult; no single dataset was suitable, so we combined multiple datasets. The images are in .jpeg and .png formats, and the metadata is provided in CSV files outside of the DICOM standard. The combined dataset contains a balanced mix of benign and malignant skin moles. For each image, the model predicts a boolean target as a probability between 0 and 1; in the training data (train.csv), benign is denoted by 0 and malignant by 1. train.csv contains the training data and test.csv the testing data, with the following columns:
- "image name": the unique filename of each image.
- "patient ID": the unique patient ID.
- "sex": the patient's gender, left empty when unknown.
- "age approx": the patient's approximate age when the image was taken.
- "anatomy site general challenge": the body location where the image was taken.
- "diagnosis": a detailed description, available for training only.
- "benign malignant": whether the imaged lesion is benign or malignant.
- "target": the boolean target for each data point.
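A small sketch of what loading this metadata might look like with pandas. The three rows below are made-up placeholders, and the underscore-style column names are an assumption about how the headers are spelled in the actual CSV files:

```python
from io import StringIO

import pandas as pd

# Hypothetical excerpt standing in for train.csv; values are invented.
csv_text = StringIO(
    "image_name,patient_id,sex,age_approx,"
    "anatom_site_general_challenge,diagnosis,benign_malignant,target\n"
    "IMG_0001,IP_001,male,45,torso,nevus,benign,0\n"
    "IMG_0002,IP_002,female,60,head/neck,melanoma,malignant,1\n"
    "IMG_0003,IP_001,,35,lower extremity,unknown,benign,0\n"
)
train = pd.read_csv(csv_text)

# target is the boolean label: benign = 0, malignant = 1
print(train["target"].map({0: "benign", 1: "malignant"}).tolist())
# Unknown gender is left empty, so it parses as NaN.
print(int(train["sex"].isna().sum()))
```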
Fig. 2. Sample Dataset
In our proposed architecture, the test image passes through several modules. First, a user-friendly front-end application lets the user capture or upload an image of the skin lesion. Next, a CNN model classifies the image as benign or malignant.
The project starts when the user opens the application on a compatible mobile phone. The user either takes a picture of the skin lesion with the camera or uploads one from the gallery, then taps to analyze the image. On that click event, the image is sent to the convolutional neural network, which passes it through the network and generates a percentage (also called a score) between zero and one hundred. The result is returned to the user interface, where the user can review it. This whole flow can be seen in Figure 3.
Fig. 3. Software Flow Diagram
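The click-to-score step of this flow can be sketched as below: preprocess the uploaded image, run it through the CNN, and scale the sigmoid output to the 0–100 score shown in the UI. The tiny stand-in model and the `score_lesion` helper name are assumptions for illustration; the real app would load the trained classifier instead.

```python
import numpy as np
from tensorflow.keras import layers, models

# Stand-in for the trained classifier (real app: load saved weights).
model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])

def score_lesion(image: np.ndarray) -> float:
    """Return a malignancy score in [0, 100] for one RGB image."""
    x = image.astype("float32") / 255.0  # same rescaling as training
    x = np.expand_dims(x, axis=0)        # add the batch dimension
    prob = float(model.predict(x, verbose=0)[0, 0])
    return prob * 100.0                  # percentage shown to the user

# Dummy 224x224 image in place of a captured photo.
score = score_lesion(np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8))
print(0.0 <= score <= 100.0)
```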
We split the data into train and test folders using Windows Explorer. For data retrieval, we then used the ImageDataGenerator from the Keras library to divide the training set into training and validation sets, with a fraction of 0.2 reserved for validation. We also applied a rescaling factor of 1./255 to each retrieved image so that no information is lost, kept the retrieved image size at (224, 224, 3), and used a batch size of 64, which we found to be the most optimal.
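A hedged sketch of this data-retrieval step: `validation_split=0.2` carves a validation subset out of the training folder and `rescale=1./255` normalizes pixel values to [0, 1]. The `train/benign` and `train/malignant` folder layout is an assumption about how the split folders were arranged, and the dummy images below merely stand in for the real dermoscopic photos so the sketch runs end to end.

```python
import os

import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stand-in dataset: a few random images per class (assumed layout).
for cls in ("benign", "malignant"):
    os.makedirs(f"train/{cls}", exist_ok=True)
    for i in range(5):
        Image.fromarray(
            np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
        ).save(f"train/{cls}/{i}.png")

# Rescale pixels and reserve 20% of the training folder for validation.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_gen = datagen.flow_from_directory(
    "train", target_size=(224, 224), batch_size=64,
    class_mode="binary", subset="training")
val_gen = datagen.flow_from_directory(
    "train", target_size=(224, 224), batch_size=64,
    class_mode="binary", subset="validation")

images, labels = next(train_gen)
print(images.shape, float(images.max()) <= 1.0)  # rescaled to [0, 1]
```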
In this project, we used three models; for two of them we applied the transfer learning technique, using InceptionV3 and ResNet50. The Inception architecture of GoogLeNet was designed to perform well even under strict constraints on memory and computational budget. Although the complexity of the network makes it harder to modify, the generic structure of the Inception-style building blocks is flexible enough to incorporate such constraints naturally. This is enabled by the generous use of dimensionality reduction and the parallel structure of the Inception modules, which mitigates the impact of structural changes on nearby components. ResNet won the ImageNet challenge in 2015 and first introduced the concept of skip connections, which make it possible to train very deep networks with 150+ layers; the winning variant, ResNet-152, had 152 layers.
Besides these pre-trained neural networks, we also built a custom model using the Sequential API of Keras. Each block consists of two convolutional layers and a single MaxPool layer, with ReLU as the activation function, valid padding, a stride of 1, and 32 filters. We used five such blocks, followed by a batch normalization layer. The output is then flattened and passed through multiple dense layers and a single dropout layer. The custom model can be seen in Figure 4.
Fig. 4. Custom Model Details
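The custom architecture described above can be sketched as follows. The five conv blocks, 32 filters, stride 1, valid padding, ReLU, batch norm, flatten, dense layers, and dropout all come from the text; the 3x3 kernel size, the dense width of 128, and the 0.5 dropout rate are assumptions not stated in it.

```python
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(224, 224, 3))])

# Five blocks: two Conv2D layers (32 filters, stride 1, valid padding,
# ReLU) followed by a MaxPool, as described for the custom model.
for _ in range(5):
    model.add(layers.Conv2D(32, 3, strides=1, padding="valid",
                            activation="relu"))
    model.add(layers.Conv2D(32, 3, strides=1, padding="valid",
                            activation="relu"))
    model.add(layers.MaxPooling2D())

model.add(layers.BatchNormalization())
model.add(layers.Flatten())
model.add(layers.Dense(128, activation="relu"))  # assumed width
model.add(layers.Dropout(0.5))                   # assumed rate
model.add(layers.Dense(1, activation="sigmoid")) # benign vs malignant

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 1)
```

With valid padding and 3x3 kernels, the 224x224 input shrinks to 3x3 after the fifth block, which keeps the flattened feature vector small before the dense layers.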
Results and Presentation
After training each of the aforementioned models on our dataset, we obtained the results shown in Figure 5. The custom model performs extremely well on the training data but very poorly and erratically on the validation data, a clear sign of over-fitting. The InceptionV3 model, by contrast, learns on both the training and validation data, which is good performance on the given dataset and reduces the chance of both over-fitting and under-fitting. For the last model, ResNet, the validation loss is already low from the beginning and barely improves, suggesting the model learns little on the validation set. Observing the results from all three models, we conclude that InceptionV3 performs best on our problem.
After comparing the results of each classifier, we concluded that the best classifier for our research problem is InceptionV3, as it performs very well and gives good results. The Call to Action on skin cancer prevention focuses on reducing UV exposure, with an emphasis on addressing excessive, avoidable, or unnecessary UV exposure (such as prolonged sun exposure without adequate protection) and intentional exposure for skin tanning. The U.S. has one of the highest rates of skin cancer in the world, and indoor tanning devices expose users to intense UV radiation to tan the skin for cosmetic reasons. This project can therefore help many people identify the disease in time and act accordingly.
Fig. 5. Summary Results
The Use of AI In The Field of Skin Cancer
There are several ways to detect and diagnose such diseases, and one of them is AI.
What is AI in healthcare? AI is being used in healthcare in a variety of ways, including to help diagnose diseases, develop new treatments, and improve patient care.
One of the most promising applications of AI in healthcare is skin cancer detection. Skin cancer is the most common type of cancer, and early detection is essential for successful treatment. AI-powered skin cancer detection systems can help dermatologists identify suspicious lesions more accurately and efficiently.
However, there are also costs associated with AI in healthcare. The initial development and deployment of AI systems can be expensive, as can training and maintaining them. Additionally, there is a risk that AI systems could generate false positives or false negatives, leading to unnecessary referrals or missed diagnoses.
Despite these costs, the potential benefits of AI for skin cancer detection are significant. AI systems can help to improve access to care for patients in rural or underserved areas, and they can also help to reduce the workload on dermatologists. Additionally, as AI technology continues to develop, the cost of AI systems is expected to come down.
The Benefits of AI To Detect Skin Cancer
AI delivers value-based care in healthcare, and skin cancer detection is no exception. After analyzing vast volumes of data, including dermoscopic pictures, AI-powered algorithms can rival human specialists at identifying worrisome lesions. This may facilitate earlier skin cancer detection, which is crucial for effective treatment.
There are several key benefits of AI in healthcare for skin cancer detection:
Greater accuracy: AI systems can gain an accuracy edge over human experts because they can be trained on large datasets of dermoscopic pictures. Studies have demonstrated that AI systems can achieve sensitivity and specificity rates of over 90% for the identification of skin cancer.
Efficiency gains: AI systems can process large numbers of photos quickly and effectively, helping dermatologists and other healthcare practitioners handle more patients. This can reduce wait times for patients and make better use of resources.
Cost savings: By improving early cancer diagnosis, AI systems have the potential to lower the overall cost of skin cancer treatment, resulting in faster, more effective treatment as well as fewer unnecessary biopsies and other procedures.
Improved access: AI-powered skin cancer detection systems can be deployed in remote and underserved areas, giving patients better access to high-quality care.
This model can be further extended with a time-series model, which could predict the number of cancer cases expected in the coming years. A time-series model can also be applied to data obtained from international health websites, which includes data from hospitals; such data can be used to predict the expected number of cases along with the possible underlying causes of skin cancer in patients.
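As a minimal sketch of that extension, one could fit a simple trend to yearly case counts and project it forward. The counts below are made-up placeholders, not real incidence data, and a plain linear fit stands in for whatever time-series model would actually be chosen:

```python
import numpy as np

# Hypothetical yearly case counts (placeholder data, not real figures).
years = np.array([2016, 2017, 2018, 2019, 2020, 2021])
cases = np.array([980, 1040, 1110, 1150, 1230, 1290])

# Fit a linear trend: cases ≈ slope * year + intercept.
slope, intercept = np.polyfit(years, cases, 1)

# Project the expected number of cases for the next two years.
for year in (2022, 2023):
    print(year, round(slope * year + intercept))
```

A real deployment would likely use a proper forecasting model (e.g. ARIMA or exponential smoothing) rather than a straight line, but the projection step is the same idea.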